‘Build AI that supports collective well-being’
Theme: Tech for whom?
On a quest to create more inclusive AI systems, KTH researcher Amir H. Payberah explores equity and justice in the development of large language models. His goal is to challenge the structural silences embedded deep within these systems.
Amir H. Payberah is an Associate Professor of Computer Science at KTH with a background in technology, distributed algorithms, and AI systems. He has increasingly questioned how AI, and in particular large language models (LLMs), is developed. LLMs are computer programs that learn patterns in text and use them to generate new content.
“Over time, I saw a big gap between the work I was doing and its real impact on society and the environment. That is why I shifted my focus and began looking at technology through a lens of critical consciousness and intersectional feminism, asking not just what we build, but who it benefits, who is excluded, and what societal costs it creates,” says Payberah.
For Amir H. Payberah, the question is not whether we should use AI systems, but how they can be developed and used differently, in collaboration with the communities they affect.
“I am interested in alternative approaches that place values like justice, co-liberation, care, and equity at the centre, so that AI supports collective well-being rather than reproducing existing inequalities,” he says.
Structural omissions
In the project ‘The Missing Data’, Amir H. Payberah works with KTH researcher Lina Rahm, who specialises in Artificial Intelligence and Autonomous Systems.
The project’s name refers to the structural data omissions embedded in the development, implementation, and evaluation of AI systems – omissions that reinforce broader patterns of inequality. The project focuses on three often overlooked areas: marginalised experiences, invisible labour, and environmental impact.
“One example is how data on gender-based violence or unpaid care work is often missing from public datasets. This invisibility shapes policies and resource allocation in ways that reinforce inequality. A similar omission appears in disaster-response systems, which often map roads and buildings but lack data on people without cars or with caregiving responsibilities. As a result, evacuation tools can overlook those most at risk.”
The project includes building a first test tool, a so-called proof of concept, that uses LLMs to identify ‘expected-but-missing’ relationships in datasets.
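To make the idea concrete, one minimal way such a tool could work is sketched below in Python: a schema-level audit that asks a general-purpose LLM which variables one would expect in a given dataset but that are absent from its columns. The article does not describe the project’s actual implementation, so the client, model name, prompt, and the audit_schema helper are all illustrative assumptions, not the researchers’ method.

```python
# Hypothetical sketch of an 'expected-but-missing' audit for a tabular
# dataset. Uses the OpenAI chat API as one example LLM backend; the
# prompt, model choice, and function names are assumptions for
# illustration only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def audit_schema(dataset_description: str, columns: list[str]) -> list[str]:
    """Ask an LLM which variables seem expected but are missing."""
    prompt = (
        f"A dataset is described as: {dataset_description}\n"
        f"Its columns are: {', '.join(columns)}.\n"
        "List variables one would normally expect in such a dataset but "
        "that are missing, especially ones affecting marginalised groups "
        "(e.g. unpaid care work, caregiving duties, car ownership). "
        'Answer only with a JSON array of strings, e.g. ["care_duties"].'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    # A production tool would validate the reply; this sketch trusts
    # that the model returned the requested JSON array.
    return json.loads(response.choices[0].message.content)


# Example: a disaster-evacuation dataset that maps infrastructure only,
# echoing the omission described above.
missing = audit_schema(
    "evacuation planning for coastal flooding",
    ["road_id", "building_id", "capacity", "distance_to_shelter"],
)
print(missing)  # might include "car_ownership", "mobility_needs", ...
```

A real version would need to go well beyond a single prompt, for instance cross-checking suggestions against community input and qualitative fieldwork, which is exactly the combination of technical and qualitative methods the project describes.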
“By putting lived experiences at the centre, combining technical and qualitative methods, and working closely with community partners, we aim to build tools and frameworks that make AI systems more accountable, inclusive, and socially grounded,” says Payberah.
Other activities
Harms can also arise from collecting more data, rather than less. Detailed user data can, for example, enable surveillance or control over already vulnerable communities. At the same time, AI development has the potential to support participation and liberation. Last year, Amir H. Payberah initiated ‘Co-liberative Computing’, a research group at KTH committed to promoting critical consciousness within computer science education and research.
“I wanted to create a space where we build critical awareness and place justice at the centre of computer science — not as an add-on, but as a core principle. The co-liberative part is essential here, as justice and liberation only work when they are for all of us,” says Payberah.
You teach a PhD course on Data Feminism. What feedback do you get?
“I have received very positive feedback. We discuss topics that are often overlooked at technical universities. The growing number of PhD students joining this course from universities across Sweden shows how much a space for this kind of discussion has been missing.”
Text: Alexandra von Kern