
A critical eye can make AI better

Should humans adapt to AI or should it be the other way round? (Photo: Mostphotos)
Published Apr 03, 2025

Are we too uncritical of the development of artificial intelligence? With a more questioning eye, we can arrive at AI solutions that take more voices and perspectives into account, according to researchers in the history of technology.
"Critical research exists, but it often has very little power over technological development," says Lina Rahm, who studies how AI solutions are used in practice.

About Lina Rahm


Lina Rahm is Assistant Professor in the History of Media and Environment with specialization in Artificial Intelligence and Autonomous Systems. Her research focuses on sociotechnical and educational imaginaries.

Robot colleagues, literacy technologies and digital assistants - more and more AI tools are entering our lives.

But if the technology is to benefit us as much as possible, we need a better holistic approach and more scientific fields to be involved when AI systems are developed, says Rahm. She wants to see both increased collaboration between disciplines and more researchers building on previous research.

Lina Rahm studies AI and autonomous systems from a user perspective, both human and societal, and from a historical perspective, examining the consequences of technology use from the 1950s onwards.

Several of her studies relate to education. The latest concerned a library robot designed to make children love books and practise reading aloud. The technology was presented as self-sufficient, but in practice it needed a lot of manual work.

"This is something that often happens with new technology in schools and reading. The teacher gets a lot of extra work in connection with the new tools, a fact that school research has shown since the 1970s."

A lot of extra work

Often the view of the user, in this case the teacher, is flawed:

"There is a too narrow view, an idea of what teachers do in their profession without basing it on research."

By critically examining AI, she wants to highlight power structures: who benefits and who is marginalised by technological advances. Teachers risk losing their independence when AI tools take over the classroom, she points out.

Reading robot waiting for a job in a library in Stockholm (Photo: KTH)

"They lose influence over the organisation of work, which becomes a technical issue rather than a matter of the teacher's pedagogical competence."

A pitfall in AI

There are various reasons for automating teachers' work, such as teacher shortages, efficiency gains and making work easier:

"But rarely are teachers asked what kind of work they want to automate."

A pitfall in AI development, she says, can be that it tries to solve technical problems by configuring the user instead of the technology, for example adapting the teacher to the AI tool rather than the other way round.

Rahm wants to highlight how power shapes technological development. AI cannot be understood as a neutral force; it must be scrutinised critically because its potential and risks are deeply interwoven with society's power structures and ways of viewing knowledge, she says.

"The introduction of new technologies is in itself a way of reshaping society, and here humanities and social science research with a critical perspective can be used to interpret and better understand complex social processes."

Text: Christer Gummeson (gummeson@kth.se)

