
Supervisor: Anna-Kaisa Kaila, doctoral student

Project 1: Mitigating ethical harms from creative AI

Background

Artificial intelligence (AI) plays an increasingly prominent role in various types of creative media and art practices, but research into the ethical implications of AI in these domains is still in its early stages. Your thesis work will contribute ideas for how to mitigate the risks and harms of AI for creative work, or help establish a roadmap towards more responsible creative AI.

Task

This project explores a selection of social and ethical aspects of current creative AI development or use practices. The domain of interest can be, for instance, music, visual arts, performing arts, digital arts, or something else you want to focus on. The purpose of this project is to provide a proposal for an intervention that would steer the creative AI field towards a more ethical and inclusive future. For instance, you could:

  • develop and test an ethical analysis tool for creative-AI developers (see Kaila et al 2023)

  • conduct an analysis of a dataset typically used for training generative AI models (see the illustrative sketch below)

  • explore and suggest a set of criteria for a (future) AI-fairness certification in a specific domain

  • suggest a method for artists to protect their work against unauthorised data use

  • or provide some other initiative for mitigating harm and promoting ethical and socially sustainable development and use practices in creative-AI domains.

Or feel free to suggest your own ideas! In your thesis work, you will likely address questions of authorship/ownership, access/exclusion, enrichment/exploitation, fairness, or diversity in data and/or in art. The proposal can be targeted at creative-AI developers, artist communities, or other relevant stakeholder groups.
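
If you choose the dataset-analysis direction, part of the groundwork can be done programmatically. As a minimal, purely illustrative sketch (Python), the snippet below tallies licence and creator-attribution fields in a hypothetical metadata export; the file name and column names are assumptions rather than the schema of any particular dataset, and a real audit would combine such descriptive statistics with a critical reading of how the data was collected and documented.

    # Illustrative sketch only: tallies licence and attribution fields in a
    # hypothetical metadata export of a training dataset. The file name and
    # column names (license, creator) are assumptions, not the schema of any
    # particular dataset.
    import csv
    from collections import Counter

    def audit(path: str) -> None:
        licences = Counter()
        missing_creator = 0
        total = 0
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                total += 1
                licences[(row.get("license") or "").strip() or "unspecified"] += 1
                if not (row.get("creator") or "").strip():
                    missing_creator += 1
        if total == 0:
            print("no records found")
            return
        print(f"{total} records in total")
        print(f"{missing_creator} records ({missing_creator / total:.1%}) lack creator attribution")
        for licence, n in licences.most_common(10):
            print(f"  {licence}: {n}")

    if __name__ == "__main__":
        audit("records.csv")  # hypothetical export file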

Methods

The applicable method depends on the chosen research question, but in most cases you will need to ground your proposal in an analysis of the current state of affairs, by mapping relevant AI tools and/or conducting an ethnographic study of their developer or user communities. This may include, for example, interviews or surveys, workshops, or ethnographic analysis of online platforms. The project will then result in a proposal for an intervention and a discussion of the conditions in which it may be applicable. Critical reflection on the political, economic, technological, and ideological contexts of AI tools and AI art production and reception is encouraged.
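
If you map relevant AI tools, it helps to decide up front which attributes you record for each tool, so that the mapping stays comparable across tools. The Python sketch below shows one possible record structure; the attribute names and the single example entry are illustrative assumptions, not a prescribed coding scheme.

    # Minimal sketch of one way to structure a mapping of creative-AI tools.
    # The attributes are examples of dimensions one might record; they are
    # not a prescribed coding scheme.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class ToolRecord:
        name: str
        domain: str                   # e.g. "music", "visual arts"
        developer: str
        training_data_disclosed: bool
        artist_opt_out_available: bool
        output_licence: str           # e.g. "user owns output", "unclear"
        notes: str = ""

    tools = [
        ToolRecord("ExampleTool", "music", "Example Ltd.", False, False,
                   "unclear", "hypothetical entry for illustration only"),
    ]

    print(json.dumps([asdict(t) for t in tools], indent=2))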

Initial references

  • Selection of creative AI tools and services available at www.futurepedia.io/

  • Harry H. Jiang, Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers, and Timnit Gebru. 2023. AI Art and its Impact on Artists. In Proceedings of AIES '23. DOI: doi.org/10.1145/3600211.3604681

  • Anna-Kaisa Kaila, Petra Jääskeläinen and Andre Holzapfel. 2023. Ethically Aligned Stakeholder Elicitation (EASE): Case Study in Music-AI. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME). www.nime.org/proceedings/2023/nime2023_18.pdf

  • Fabio Morreale. 2021. Where Does the Buck Stop? Ethical and Political Issues with AI in Music Creation. Transactions of the International Society for Music Information Retrieval, 4(1), pp. 105–113. DOI: doi.org/10.5334/tismir.86

  • Catherine D'Ignazio and Lauren F. Klein. 2020. Data Feminism. The MIT Press. DOI: doi.org/10.7551/mitpress/11805.001.0001

  • Simon Lindgren. 2024. Critical Theory of AI. Cambridge: Polity.

Supervisor: Anna-Kaisa Kaila

Project 2: AI art reception


Background

Compared to the development of tools for AI content generation, research on the reception and use of AI art remains under-explored. This project focuses on the reception and critique of AI art, taking a specific AI artwork or performance as a case study. Looking closely at the values presented and the tensions that emerge in how AI art is presented and critiqued shows how creative-AI tools and their output enter the social world of art institutions and audiences.

Task

This project explores the cultural context of AI art and its reception by analysing public reviews and critiques of a selected (AI) artwork or performance. You can draw on various analytical perspectives (e.g. historical, economic, geographical, political, critical, legal, labour-related, technological, scientific, aesthetic, mythological, bodily, narrative, or symbolic). The reviews can also be contrasted, for example, with reviews of historical computational artworks, with process descriptions of the AI systems provided by the artists and developers (see Gotham et al 2022, Colton et al 2022), or with perspectives collected through interviews with producers, curators, and other gatekeepers of art institutions. The analysis may also expose social negotiations about questions of authorship, authenticity, and algorithmic (co-)creativity in the era of AI art.

Methods

In this project, you can use thematic and/or content analysis of published critiques, carry out online ethnographies, conduct interviews or surveys, or combine these approaches. Possible case studies include the AI musical Beyond the Fence, the KTH-produced opera The Tale of the Great Computing Machine (reviews available in Swedish), or any other relatively recent static, performative, or musical artwork you find interesting.

In the analysis of the media texts, you could focus on the topics discussed and arguments presented; agencies (who is talking), patterns, and styles of the discourse; meanings and narratives expressed and negotiated; and values presented (or missing!). Pay close and critical attention to the wider political, economic, technological, and ideological contexts of AI tools and AI art production and reception.
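
If you work with a larger corpus of published critiques, a simple computational pass can help you locate passages worth close reading before the actual manual thematic coding. The Python sketch below counts how many review files mention a few assumed theme keywords; the themes, keywords, and folder layout are illustrative assumptions and no substitute for qualitative analysis.

    # Exploratory first pass only: counts how many plain-text review files in a
    # folder touch a few *assumed* themes. This does not replace manual thematic
    # coding; it merely points to reviews worth reading closely.
    import re
    from collections import Counter
    from pathlib import Path

    THEMES = {  # keyword lists are illustrative assumptions
        "authorship": ["author", "composer", "creator", "credit"],
        "authenticity": ["authentic", "genuine", "soul", "human touch"],
        "labour": ["job", "labour", "labor", "replace"],
    }

    def tag_reviews(folder: str) -> Counter:
        counts = Counter()
        for path in Path(folder).glob("*.txt"):
            text = path.read_text(encoding="utf-8").lower()
            for theme, keywords in THEMES.items():
                if any(re.search(rf"\b{re.escape(k)}", text) for k in keywords):
                    counts[theme] += 1  # number of reviews touching each theme
        return counts

    if __name__ == "__main__":
        for theme, n in tag_reviews("reviews").most_common():  # "reviews" folder is hypothetical
            print(f"{theme}: {n} reviews")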

Initial references

  • Gotham, M. et al. 2022. Beethoven X: Es könnte sein! (It could be!). In Proceedings of the 3rd Conference on AI Music Creativity (AIMC 2022). DOI: doi.org/10.5281/zenodo.7088335

  • Simon Colton et al. 2022. The Beyond the Fence Musical and Computer Says Show Documentary. arXiv preprint arxiv.org/abs/2206.03224

  • Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, and Michelle Bao. 2022. The Values Encoded in Machine Learning Research. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 173–184. doi.org/10.1145/3531146.3533083

Supervisor: Anna-Kaisa Kaila