
2022-2023 Master Projects

Brief Background

In recent years, the field of explainable AI (XAI), which aims to make machine learning algorithms more transparent, has grown rapidly. However, most developments in XAI focus on domain-generated explanations, which often fail to produce appropriate explanations in human-centred applications. The seminal work by Miller [1] bridges the gap between the social sciences' view of explanation and explanations generated by ML-based systems. When targeting human-centred scenarios, making autonomous systems more explainable is as crucial as grounding the explanations in an understanding of the human perspective and of people's mental models of the system [2]. Aligned with this view, we have defined three master projects, each approaching the problem from a different direction.

 

Project 1: Modelling humans’ perception of robots with Bayesian inference

This project focuses on developing a Bayesian approach to infer humans’ perception of a robot’s abilities, compare that perception against the robot’s true abilities, and iteratively update the model throughout the interaction as the robot and the human become more aware of each other’s skills [3]. For initial testing, we need to design a task that lets us collect quantitative data about the human’s perception of the robot over the course of the task. The robot might provide information about its abilities directly or indirectly, or the abilities might become apparent to the human through the task itself. The way the robot provides an explanation can be inspired by research in XAI on goal-driven or data-driven explanations [4]; however, the focus of this project is on incorporating Bayesian inference into the loop.
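
To give a concrete feel for the kind of inference loop involved, below is a minimal, hypothetical sketch (not part of the project specification): the human's perceived robot ability is modelled as a Beta distribution over the robot's success probability and updated after each interaction round, so the gap between perception and the robot's true ability can be tracked over time.

import numpy as np

# Minimal sketch (hypothetical): the human's belief about the robot's
# success rate on a task is modelled as a Beta distribution that is
# updated after every interaction round, while the robot's true ability
# is a fixed success probability unknown to the human.

rng = np.random.default_rng(0)

true_ability = 0.8          # robot's actual success probability (ground truth)
alpha, beta = 1.0, 1.0      # uniform prior over the perceived ability

for round_idx in range(10):
    outcome = rng.random() < true_ability   # robot succeeds or fails this round
    alpha += outcome                         # Beta-Bernoulli posterior update
    beta += 1 - outcome
    perceived = alpha / (alpha + beta)       # posterior mean = modelled perception
    gap = perceived - true_ability           # mismatch between perception and reality
    print(f"round {round_idx}: perceived={perceived:.2f}, gap={gap:+.2f}")

In the actual project, the observations would of course come from the interaction task rather than a simulated coin flip, and the perceived ability would be compared against the robot's true abilities to decide how and when it should explain itself.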

Useful skills:

- Basic knowledge of machine learning

- Basic familiarity with a programming language (ideally python)

- Interest in working with new datasets

- Interest in human-robot interaction

 

Project 2: Adapting robot failure explanations to the user's needs

As robots and intelligent systems become more integrated into our daily lives, we are also more likely to experience robot failures. The field of explainable AI aims to improve the interpretability of AI systems by providing explanations to domain experts. However, when robots or AI systems interact with non-expert users, particularly when failures happen, the explanations need to be adapted accordingly [5].

The project builds upon a dataset that will be collected from several rounds of human-robot interaction, where in each round the human faces at least two robot failures. The dataset is collected under five experimental conditions, and in each condition the extent of the explanations given by the robot changes across interaction rounds. The project involves finding ways to annotate the collected dataset, identifying the relevant features, and building a model that learns to adapt the extent of the explanations given to the human based on the human's task performance and understanding. The data is collected through multimodal channels and includes offline tracking of the face, body, and gaze.
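
As a purely illustrative sketch of the modelling step (the dataset has not been collected yet, so all feature names, labels, and values below are hypothetical placeholders), one could start from a standard supervised classifier that maps per-round interaction features to the extent of explanation that best supported the user:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative sketch only: features and labels are hypothetical placeholders.
# Each row could summarise one interaction round with, e.g., task score,
# time to recover from the failure, and fraction of gaze directed at the robot;
# the label is the extent of explanation that best supported the user.

rng = np.random.default_rng(0)
n_rounds = 200
X = np.column_stack([
    rng.uniform(0, 1, n_rounds),    # hypothetical: normalised task performance
    rng.uniform(0, 60, n_rounds),   # hypothetical: seconds to recover from failure
    rng.uniform(0, 1, n_rounds),    # hypothetical: proportion of gaze on the robot
])
y = rng.integers(0, 3, n_rounds)    # hypothetical label: 0=none, 1=brief, 2=detailed explanation

model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"mean cross-validated accuracy on placeholder data: {scores.mean():.2f}")

The actual choice of features, labels, and model is part of the project; the sketch only shows the shape of the learning problem once the multimodal recordings are annotated.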

Useful skills:

- Basic knowledge of machine learning

- Basic familiarity with a programming language (ideally python)

- Interest in working with new datasets

- Interest in human-robot interaction

 

Project 3: Human factors involved in explainability in autonomous driving

With the rise of research on and deployment of autonomous vehicles driven by computationally powerful AI, the field of explainable AI has been actively working on making the AI systems of autonomous vehicles more explainable. This is particularly essential from the legal and regulatory perspective on deploying autonomous vehicles, which requires their decisions to be explainable and transparent [6]. This project focuses on

1) investigating what type of explainability the user (driver, other drivers, pedestrians) needs depending on the level of vehicle autonomy, and

2) developing and running an experiment in which different scenarios, with and without those explanations, are tested.

The project can also include research on how the AI of the autonomous vehicle could predict when explanations are needed and provide them to the user without being prompted.
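
As a rough, hypothetical illustration of that last idea (all features, labels, and the labelling rule below are invented for the example), predicting the need for an explanation could be framed as a simple binary classification over driving-scenario features:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch: a classifier that predicts, from simple scenario
# features, whether an unprompted explanation is likely to be needed.
# Features and data are made up purely for illustration.

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.uniform(0, 1, n),   # hypothetical: magnitude of unexpected braking
    rng.uniform(0, 1, n),   # hypothetical: deviation from the planned route
    rng.uniform(0, 1, n),   # hypothetical: uncertainty of the driving policy
])
# Placeholder labelling rule: explanations tend to be needed when the
# manoeuvre is both unexpected and uncertain.
y = ((X[:, 0] + X[:, 2]) > 1.0).astype(int)

clf = LogisticRegression().fit(X, y)
print("predicted need for explanation:", clf.predict([[0.9, 0.1, 0.8]])[0])

In the project itself, which signals to use and how to validate such a trigger against what users actually find helpful would be part of the experimental work.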

Useful skills:

- Basic knowledge of machine learning

- Basic familiarity with a programming language (ideally python)

- Interest in working with new datasets

- Interest in human-robot interaction

 

References: 

[1] Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence, 267, 1-38.

[2] FeldmanHall, O., & Shenhav, A. (2019). Resolving uncertainty in a social world. Nature Human Behaviour, 3(5), 426-435.

[3] Devaine, M., Hollard, G., & Daunizeau, J. (2014). The social Bayesian brain: Does mentalizing make a difference when we learn? PLoS Computational Biology, 10(12), e1003992.

[4] Sakai, T., & Nagai, T. (2022). Explainable autonomous robots: A survey and perspective. Advanced Robotics, 36(5-6), 219-238.

[5] Das, D., Banerjee, S., & Chernova, S. (2021, March). Explainable AI for robot failures: Generating explanations that improve user assistance in fault recovery. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (pp. 351-360).

[6] Atakishiyev, S., Salameh, M., Yao, H., & Goebel, R. (2021). Explainable Artificial Intelligence for Autonomous Driving: A Comprehensive Overview and Field Guide for Future Research Directions. arXiv preprint arXiv:2112.11561.

