
Welcome to the SCS Seminar series. Every second week, we will have an exciting seminar with invited speakers at the SCS Division. Unless stated otherwise, the seminars will be hybrid: in person in room Ada at KTH Kista and over Zoom.
Organizer and contact person: Amir H. Payberah.

Title: Bayesian deep learning in the era of large-scale models
Speaker: Martin Trapp, Assistant Professor at the Division of Software and Computer Systems at KTH
Bio: Martin Trapp is an Assistant Professor in machine learning at KTH and a member of the ELLIS society. Before joining KTH, he was an Academy of Finland postdoctoral researcher at Aalto University. His research is centred around representing, quantifying, and reducing uncertainties to make machine learning more trustworthy. He is particularly interested in efficient and principled approaches for large-scale models such as LLMs & VLMs.
Abstract: In recent years, there has been a growing interest in improving the reliability and trustworthiness of machine learning models by quantifying their predictive uncertainties. This has led to a series of approaches, including techniques that apply the Bayesian principle to estimate uncertainties over the neural network's posterior. Popular examples of this line of work are ensembling techniques, such as deep ensembles and Monte Carlo dropout. However, those techniques typically require changing either the model architecture or the optimisation procedure, and often introduce substantial computational and memory overheads, making them impractical for contemporary large-scale models. Fortunately, recent techniques for post-hoc posterior estimation present an interesting opportunity for those model classes, as they do not require changes to the model architecture or the optimisation procedure. However, scaling those techniques to large neural networks, such as LLMs or VLMs, can be challenging. In this talk, I will discuss two recent papers that aim to make post-hoc methods for Bayesian deep learning practically relevant for downstream tasks. First, I will present our ICLR 2025 paper, “Streamlining Prediction in Bayesian Deep Learning” (https://arxiv.org/abs/2411.18425), which demonstrates how to analytically propagate uncertainties through a network function, providing a general framework for performing predictions in Bayesian deep learning with minimal to no overhead. Second, I will discuss our recent arXiv paper, “Probabilistic Post-hoc Vision-Language Models” (https://arxiv.org/abs/2412.06014), in which we show how to utilise the Bayesian principle in VLMs for efficient and effective uncertainty quantification and reduction. Lastly, I will highlight ongoing work that extends beyond post-hoc techniques and aims to leverage the specificities of large-scale deep learning models for effective uncertainty quantification.
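
As a rough illustration of analytic uncertainty propagation (a minimal sketch with made-up numbers, not the method of the papers above): for a single linear layer whose weights carry an independent Gaussian posterior, the output mean and variance can be computed in closed form.

```python
def propagate_linear(x, w_mean, w_var, b_mean):
    """Push an input through a linear layer whose weights carry an
    independent (diagonal) Gaussian posterior; returns output mean/variance."""
    m = [sum(wm * xi for wm, xi in zip(row, x)) + b
         for row, b in zip(w_mean, b_mean)]
    # Var[sum_j W_ij * x_j] = sum_j Var[W_ij] * x_j^2 for independent weights
    v = [sum(wv * xi * xi for wv, xi in zip(row, x)) for row in w_var]
    return m, v

m, v = propagate_linear(
    x=[1.0, 2.0],
    w_mean=[[0.5, -0.5]],  # posterior mean of the weights (illustrative values)
    w_var=[[0.01, 0.04]],  # posterior variance of the weights
    b_mean=[0.1],
)
# single output unit: mean -0.4, variance 0.17
```

The papers above extend this idea to whole networks with nonlinearities; this sketch covers only the linear case.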

Title: Graph-Based Machine Learning for Safer Drug Discovery
Speaker: Golnaz Taheri, Assistant Professor at the Division of Computational Science and Technology at KTH
Bio: Golnaz Taheri is an Assistant Professor at KTH and a Data-Driven Life Science (DDLS) Fellow at SciLifeLab and the Wallenberg Program. Golnaz holds a PhD in Computer Science and previously served as an Assistant Professor in the Data Science Research Group at Stockholm University. Her research focuses on advancing machine learning methodologies and applying them to large-scale biological data, with the aim of deepening our understanding of biological systems and supporting progress in areas such as personalized medicine and cancer genomics.
Abstract: Polypharmacy, or the use of multiple drugs for complex conditions, poses a serious risk of harmful drug–drug interactions. Traditional lab-based detection is slow, while existing computational methods often fail to exploit the power of graph modeling. We developed a machine learning framework that leverages a drug–target–protein interaction network to capture complex biological relationships. By integrating topological features from this network with intrinsic drug characteristics, our computational approach boosts predictive accuracy from 0.64 to 0.91. This shows how graph-based machine learning frameworks can drive more accurate drug interaction prediction and enable safer, data-driven drug discovery.
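
As a toy illustration of the idea (hypothetical data, not the actual framework): topological features extracted from a drug–target network can be concatenated with intrinsic drug descriptors to form the input of a downstream classifier.

```python
# hypothetical drug–target network: each drug maps to its set of protein targets
network = {
    "drugA": {"P1", "P2"},
    "drugB": {"P2", "P3", "P4"},
}
# hypothetical intrinsic descriptors, e.g. normalised molecular weight and logP
intrinsic = {"drugA": [0.2, 1.0], "drugB": [0.7, 0.0]}

def topological_features(drug):
    """Simple network features: target degree and target overlap with other drugs."""
    neigh = network[drug]
    degree = len(neigh)
    shared = sum(len(neigh & network[d]) for d in network if d != drug)
    return [degree, shared]

def feature_vector(drug):
    # the concatenated vector would feed an interaction classifier
    return topological_features(drug) + intrinsic[drug]
```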

Title: Vertical Federated Learning and Mixture of Experts for Cybersecurity
Speaker: Gianluigi Folino, Senior Researcher at ICAR-CNR
Bio: Gianluigi Folino received a Ph.D. in physics, mathematics, and computer science from Radboud University in Nijmegen, the Netherlands. Since 2001, he has been a senior researcher at the Institute of High Performance Computing and Networking of the Italian National Research Council (ICAR-CNR), Rende, Italy. He is also a lecturer at the University of Calabria. Within ICAR-CNR, he has coordinated several national and international research and industrial projects: “Cyber Security – Digital and Electronic Payment Services Protection” in 2013 and, currently, SPOKE 1 (Digital Sovereignty) of the PNRR project SEcurity and RIghts in the CyberSpace (SERICS). He has published more than 150 papers in international conferences and journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Knowledge and Data Engineering, Information Sciences, Information Fusion, and Bioinformatics. Dr. Folino is on the editorial board of Applied Soft Computing (Elsevier).
Abstract: Data sovereignty and regulations, along with growing concerns over privacy and security, underscore the limitations of centralized machine learning (ML) in sensitive domains like cybersecurity. Federated Learning (FL) has emerged as a promising paradigm, enabling the collaborative training of global models without sharing raw data, thereby aligning with privacy and sovereignty requirements while meeting the demand for advanced ML analytics. This talk addresses these challenges and presents a framework based on sparse Mixture of Experts (MoE) architectures for FL in vertically federated settings, where parties hold complementary subsets of features. Sparse MoEs improve computational and energy efficiency by selectively activating experts and leveraging conditional computation. The framework mitigates risks of information leakage and reduces communication costs, supporting efficient model training and deployment. Additionally, the talk explores key attack scenarios, defense strategies, and efficient methods for distributing the VFL paradigm with minimal communication overhead.
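
A minimal sketch of the sparse-MoE routing idea mentioned above (illustrative only; the expert functions and gate weights are made up): only the top-k experts by gate score are evaluated, which is the conditional computation that saves work.

```python
import math

def sparse_moe(x, experts, gate_weights, k=2):
    """Route an input to its top-k experts only; unselected experts
    are never evaluated (conditional computation)."""
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    topk = sorted(range(len(experts)), key=lambda i: -scores[i])[:k]
    # softmax over the selected scores only
    top_scores = [scores[i] for i in topk]
    m = max(top_scores)
    exps = [math.exp(s - m) for s in top_scores]
    total = sum(exps)
    gates = [e / total for e in exps]
    return sum(g * experts[i](x) for g, i in zip(gates, topk))

experts = [lambda x: 1.0, lambda x: 2.0, lambda x: 3.0]  # stand-in expert networks
gate_weights = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
y = sparse_moe([2.0, 1.0], experts, gate_weights)  # expert 3 is never called
```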

Title: Multilingual Language Models: Studies of Pre-Training Approaches and Hallucination Detection
Speaker: Evangelia (Evi) Gogoulou, Senior Data Scientist at SEBx
Bio: Evi Gogoulou is a Senior Data Scientist at SEBx, working on the intersection between AI technologies and financial services. Previously, she was an AI Researcher at RISE Research Institutes of Sweden, with focus on language technology. She holds a PhD in Natural Language Processing from KTH Royal Institute of Technology on the topic of large-scale multilingual language models. Her PhD thesis was focused on studying different approaches for pre-training multilingual language models, as well as investigating the problem of hallucinations in machine-generated content.
Abstract: The performance of large language models varies significantly across languages, highlighting the importance of cross-lingual transfer for improving low-resource language capabilities. This PhD thesis investigates how language interactions during pre-training affect model performance across different training schemes, architectures, and evaluation criteria. Through experiments on multilingual joint pre-training and incremental language pre-training, we analyze the forward and backward transfer effects and identify key influencing factors, such as language similarity and contamination. Additionally, we evaluate multilingual models on hallucination detection tasks, revealing the impact of model-specific factors like size and instruction tuning. These findings enhance the understanding of cross-lingual transfer, guiding the development of multilingual models with improved learning capacity. Our work also provides resources and methods for evaluating hallucinations in machine-generated text. The comprehensive summary of my thesis can be found here.

Title: What's Cooking in the JVM?
Speakers: Daniel Lundén and Roberto Castañeda Lozano, Java Platform Group, Oracle
Bio: Daniel is a software developer and researcher with a particular interest in programming language theory, compilers, and static program analysis. He is currently working as a software developer in the HotSpot compiler team at Oracle. Daniel received his PhD degree in Information and Communication Technology in 2023 from KTH Royal Institute of Technology, Stockholm, Sweden.
Roberto is a software engineer and researcher with an interest in programming languages, software testing and verification, and combinatorial optimization. He currently works at Oracle as an OpenJDK compiler engineer. Before moving to industry, he worked as a researcher at RISE (then called the Swedish Institute of Computer Science) and at the University of Edinburgh. Roberto received his doctoral degree in 2018 from KTH Royal Institute of Technology.
Abstract: Contrary to what one might expect of a managed runtime environment that has been ubiquitous for almost three decades, the Java Virtual Machine (JVM) is constantly evolving, adapting to changes, and adopting new ideas and paradigms. A significant part of this development happens a stone's throw from Kista. The Stockholm office of the Java Platform Group at Oracle comprises a sizeable group of programming language implementation engineers, researchers, and students, and is involved in virtually all past, present, and future enhancements of the JVM. This talk gives an overview of the JVM architecture, its recent and future enhancements, and related research activities. The talk focuses on projects where the Stockholm office plays a key role, such as designing and implementing a fully concurrent garbage collector, improving escape analysis in the C2 ("server") JIT compiler, reducing the memory footprint of Java objects, and extending Java with "value objects" (objects that combine some of the abstraction features of regular Java objects with the implementation efficiency of primitive types).

Title: Unblocking AI: Understanding and Overcoming Datacenter Network Bottlenecks in Distributed AI
Speaker: Soudeh Ghorbani, Research scientist at Meta and a faculty member at Johns Hopkins
Bio: Soudeh Ghorbani is a faculty member at Johns Hopkins University and, until January 2025, was concurrently a scientist at Meta. She leads the Foundational Networked Systems Lab, where her group's research focuses on analyzing and designing large-scale networks, particularly datacenter networks. Their work has been recognized with various awards and grants from Intel, Google, Meta, Microsoft, and the National Science Foundation.
Abstract: As companies invest heavily in building AI-dedicated datacenters, a critical yet often underestimated challenge persists: datacenter networks remain a major bottleneck in distributed AI training. Despite advances in computing hardware and machine learning algorithms, network congestion and communication overhead continue to limit the scalability and efficiency of AI workloads. In this talk, I will share insights from a comprehensive study I led, in which a team of researchers and engineers instrumented networks and analyzed traffic patterns across 20+ AI datacenters of a hyperscaler. Our investigation uncovered key insights into AI workload characteristics, the root causes of network bottlenecks, and the challenges in mitigating them. Building on our findings, I will introduce novel datacenter designs that challenge traditional paradigms, such as shortest-path routing and strict packet ordering, by embracing more flexible network strategies. I will demonstrate how these techniques effectively pinpoint and resolve network bottlenecks, leading to significant performance improvements. I will conclude by discussing open research questions and future directions in optimizing AI datacenter networks.

Title: Automated Neural Network Design for Telecommunications (50% Seminar)
Speaker: Adam Orucu, Industrial PhD Student at KTH and Ericsson AB
Bio: Adam is an industrial PhD student at KTH and Ericsson. His research goal is to automatically find the best-suited neural network architectures in dynamically changing telco environments. To accomplish this, his work concerns topics of Neural Architecture Search and Transfer Learning. Adam received his MSc in Data Science from Uppsala University in 2022 and his BSc in Computer Science from Wrocław University of Science and Technology in 2020.
Abstract: The telecommunications industry is experiencing rapid growth in the adoption of deep learning for critical tasks such as traffic prediction, anomaly detection, and quality of service optimization. However, designing and training efficient neural network architectures for these applications remains challenging and time-consuming, particularly when targeting compact models suitable for resource-constrained network environments. Therefore, there is a need for automating the process of designing and training neural networks. This work explores the automation of neural network design for telecommunications through the use of neural architecture search (NAS) and transfer learning (TL). We aim to automate the design process by searching for effective architectures and reusing knowledge from existing models. We begin by analysing existing NAS methods and presenting our extensions for multi-objective tasks in telecommunication use cases. Further, we introduce a novel NAS method that searches for multi-layer perceptron architectures. This method outperforms the state-of-the-art on three critical dimensions: it (1) finds architectures up to 30 times faster, (2) achieves lower prediction error, and (3) produces significantly smaller models.
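
As a minimal, hypothetical illustration of the NAS idea (random search with a size-aware objective, far simpler than the multi-objective methods in the talk):

```python
import random

WIDTHS = (16, 32, 64, 128)

def sample_architecture(max_layers=4):
    """Sample an MLP architecture as a list of hidden-layer widths."""
    return [random.choice(WIDTHS) for _ in range(random.randint(1, max_layers))]

def cost(arch, in_dim=8, out_dim=1):
    """Stand-in multi-objective score: a fake 'error' term that shrinks with
    capacity, plus a penalty on parameter count. A real NAS would train and
    evaluate each candidate instead."""
    params = sum(a * b for a, b in zip([in_dim] + arch, arch + [out_dim]))
    return 1.0 / (1 + sum(arch)) + 1e-5 * params

def random_search(trials=50, seed=0):
    random.seed(seed)
    return min((sample_architecture() for _ in range(trials)), key=cost)

best = random_search()  # e.g. a single wide layer under this toy objective
```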

Title: Towards Formal Verification of Bayesian Inference in Probabilistic Programming
Speaker: Fabian Zaiser, Research Scientist at MIT
Bio: Fabian Zaiser is a research scientist working with the Probcomp Lab at MIT, led by Vikash Mansinghka. Previously, he was a PhD student at the University of Oxford under the supervision of Luke Ong and Andrzej Murawski. His interests are in probabilistic programming, particularly improving Bayesian inference and verifying properties of probabilistic programs.
Abstract: Probabilistic programs express (Bayesian) statistical models as programs by extending conventional programming languages with constructs for drawing samples and observing data from distributions. This enables the automation of Bayesian inference, i.e. computing the posterior distribution conditioned on the observations. Since exact inference is often intractable, practitioners generally rely on Monte Carlo methods to produce approximate samples from the posterior distribution. Unfortunately, the convergence of such approximations may be slow or depend on assumptions that do not hold in practice. Program analysis and formal methods can be used to compute guaranteed bounds on the posterior distribution, and thus to debug problems with approximate inference. In this talk, I will present two such approaches from my PhD research: the first uses abstract interpretation with interval arithmetic, and the second uses a compositional family of bounds termed “eventually geometric distributions”.
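
The interval-arithmetic approach can be illustrated with a tiny abstract domain (a sketch with invented numbers, not the actual tool): sound arithmetic on intervals yields guaranteed bounds on any quantity computed from uncertain factors.

```python
class Interval:
    """Closed interval [lo, hi] with sound add/multiply."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

# bound an unnormalised posterior weight prior(x) * likelihood(x)
# when each factor is only known up to an interval
prior = Interval(0.2, 0.3)
likelihood = Interval(0.5, 0.8)
weight = prior * likelihood  # guaranteed enclosure [0.10, 0.24]
```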

Title: One Problem, Two Lenses: Community Detection and Clustering Algorithms for Detecting Nestedness in Networks
Speaker: Imre Gera, Data Scientist at Volumental
Bio: Imre Gera is a Data Scientist at Volumental (https://volumental.com). He defended his PhD thesis in 2025 at the University of Szeged, Hungary, with a dissertation titled "Beyond Dense Subgraphs: Nestedness, Hierarchies, and Community Structures in Complex Networks". His research focuses on developing community detection and clustering algorithms for specialized graph structures, with applications in areas such as portfolio optimization.
Abstract: Network science has a large toolkit for discovering patterns in graphs, but it is not always as straightforward as just picking an algorithm and going. Network patterns can overlap, and it makes a huge difference whether we factor that overlap into our solution. It not only translates to performance but also to the amount of information we extract and the methods we then have to analyze the results. Community detection in network science doesn't always have a clear definition, and overlaps are sometimes factored into it, sometimes they aren't. To this end, I present three solutions for detecting a rarely examined network pattern called "nestedness": two overlapping community detection algorithms (a heuristic and a full one) and one clustering algorithm. I show that while the network structure under investigation is the same, our view of the problem greatly influences what information we can extract from the graph.
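
To make the pattern concrete (a toy metric, not one of the talk's algorithms): a graph is nested when node neighbourhoods form chains of subsets, which a simple pair count can probe.

```python
def nested_pairs(adj):
    """Count ordered pairs (u, v) with N(u) a subset of N(v) — a crude probe
    of nestedness in a graph given as {node: set_of_neighbours}."""
    nodes = list(adj)
    return sum(1 for u in nodes for v in nodes if u != v and adj[u] <= adj[v])

# perfectly nested toy neighbourhoods
adj = {"a": {"x"}, "b": {"x", "y"}, "c": {"x", "y", "z"}}
score = nested_pairs(adj)  # pairs a⊆b, a⊆c, b⊆c
```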

Title: Music composition aided by symbolic AI
Speaker: Peter Van Roy, Professor of Computing Science and Engineering, Université catholique de Louvain (UCL)
Bio: Peter Van Roy is full professor of Computing Science and Engineering at the Université catholique de Louvain (UCL) in Louvain-la-Neuve, Belgium. He is well-known for the textbook "Concepts, Techniques, and Models of Computer Programming", which explains many difficult programming concepts in a simple and insightful way. His research is focused on the general theme of increasing the expressive power of programming languages, with a special focus on large-scale distributed computing. He uses a combination of theory and practical system building to understand how to simplify programming and bring it to a higher level. He is a developer of the Mozart Programming System, a high-quality open-source development platform based on the Oz multiparadigm programming language, which he often uses as a research vehicle to explore and test new ideas.
Abstract: Music has a deep connection to the human condition. Composing music is like composing poetry: it is difficult, but when it succeeds it touches us deeply. Is there a way that we can use computers to help composers? We would like to amplify a composer’s creativity and not replace it; we want the musical ideas to come from the composer. Since music is highly combinatorial, we propose to use constraint programming to respect the rules of a musical style. We focus on two musical styles: classical counterpoint according to Johann Joseph Fux and modern Western tonal music. We formalize each style as a constraint model. The composer enters musical ideas, which are translated into constraints and added to the model. The tool then does the heavy lifting of molding the ideas into a concrete musical piece following the given style. In this way the creativity remains mostly with the composer. However, the tool can give serendipitous results that complement a composer's creativity. We give musical examples to show what is possible. The final tool will be a plug-in for a Digital Audio Workstation. We hope that this work will lead to an improved use of symbolic AI for music composition.
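
The constraint idea can be sketched with a toy backtracking search over MIDI note numbers (the consonance set and leap rule are drastic simplifications of Fux's rules, and none of this is the actual tool):

```python
CONSONANT = {0, 3, 4, 7, 8, 9}  # semitone intervals (mod 12) treated as consonant

def counterpoint(cantus, candidates=range(60, 73)):
    """Backtracking search for a line over a cantus firmus: every vertical
    interval must be consonant, and melodic leaps are kept small."""
    line = []
    def extend(i):
        if i == len(cantus):
            return True
        for note in candidates:
            if (note - cantus[i]) % 12 not in CONSONANT:
                continue  # vertical-interval constraint
            if line and abs(note - line[-1]) > 4:
                continue  # melodic-leap constraint
            line.append(note)
            if extend(i + 1):
                return True
            line.pop()
        return False
    return line if extend(0) else None

solution = counterpoint([60, 62, 64, 62, 60])  # a valid line over a short cantus
```

A real constraint model would also encode the composer's own ideas as additional constraints, as the abstract describes.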

Title: Securing location-based mobile computing
Speaker: Panos Papadimitratos, Professor at KTH
Bio: Panos Papadimitratos is a Professor at KTH, where he leads the Networked Systems Security (NSS) Group. Panos received his PhD from Cornell University, USA, and held positions at Virginia Tech, EPFL, and as a visiting researcher at Politecnico di Torino before joining KTH and founding the NSS Group. He is a Fellow of the IEEE (Class of 2020) and a Fellow of the Young Academy of Europe (YAE). His research focuses on the design and implementation of secure networked systems. At NSS, his research spans a broad range of topics in security and privacy, with a strong systems-oriented approach.
Abstract: A broad gamut of Internet of Things and mobile applications are location-based: their operation relies on precise position information, or they collect location-specific data. They have gained popularity, offering valuable services to users and systems. This brings forth a dual challenge: how to secure position information and how to safeguard the system from misbehaving data-collecting devices/users. In this talk, we discuss these two problems: securing Global Navigation Satellite System (GNSS) positioning and securing participatory location-based services. Time permitting, outlines of other NSS group activities will be discussed.

Title: Automated Decision-Making and Visualisation - How to use these in various fields of AI
Speaker: Anne Håkansson, Professor at the Division of Software and Computer Systems at KTH
Bio: Anne Håkansson is a professor of Computer Science with a specialization in Intelligent software systems, KTH, Sweden, and a full professor in Computer Science with a focus on AI, UiT, Norway. Her research interests lie in intelligent and multi-agent systems, in particular decision-making, reasoning strategies, and negotiation between systems and agents. Currently, she is conducting research on fully autonomous cyber-physical systems (robotics) with adaptiveness and robustness, focusing on trustworthy behavior in the transportation sector, as well as Industry 4.0 and social interactions. She is also working with healthcare, smart cities, and societies to develop automated smart products and services, including smart nudges, context-awareness, and the sustainability of smart technologies in various environments.
Abstract: Automated decision-making is a key aspect of modern AI, machine learning, and data analytics systems. It involves using algorithms to make decisions and predictions, often with minimal human intervention. This technology has a wide range of applications across various sectors. This talk presents how automated decision-making techniques and visualisation are applied across various fields of AI. It examines different examples and methodologies that demonstrate how decision-making processes and data visualisation can make complex information more accessible and actionable. Drawing on years of research, the talk demonstrates that combining these technologies enables better decisions, real-time insights, and enhanced communication across different robot teams, as well as between AI systems and stakeholders.

Title: Learning Without Labels Across Domains: From Mapping Wetlands to Breast Cancer Diagnosis
Speaker: Francisco Pena, Assistant Professor at the Division of Health Informatics and Logistics at KTH
Bio: Francisco is an Assistant Professor at the Division of Health Informatics and Logistics at KTH. His expertise is in computer vision and unsupervised machine learning, applied in diverse contexts ranging from digital pathology to remote sensing. Previously, he has held postdoctoral positions at Karolinska Institute, Stockholm University, KTH, and University College Dublin. He received his PhD in Computer Science from University College Cork.
Abstract: Annotated data is often the biggest bottleneck in applying deep learning to real-world problems, especially in domains like remote sensing and digital pathology. In this talk, I will present how self-supervised learning can help overcome this challenge by enabling powerful models to learn useful representations without the need for manual labels. I will showcase two case studies: one in wetland water detection using satellite images and another in digital pathology using whole-slide tissue images. In the first, we use a teacher–student model to automatically generate training data for segmenting water surfaces, removing the need for any hand-annotated images. In the second, we pretrain a large-scale self-supervised model on over 100 million pathology image tiles, allowing it to generalize across diverse diagnostic tasks with minimal supervision. Together, these examples highlight the potential of self-supervised learning to scale deep learning solutions in data-scarce environments across very different domains.
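
The teacher–student idea in the first case study can be sketched as confidence-filtered pseudo-labelling (hypothetical teacher and numbers, not the actual pipeline):

```python
def pseudo_label(teacher, unlabeled, threshold=0.9):
    """Keep only confident teacher predictions as training targets
    for the student (1 = water, 0 = not water)."""
    data = []
    for x in unlabeled:
        p = teacher(x)  # teacher's probability that x is water
        if p >= threshold:
            data.append((x, 1))
        elif p <= 1 - threshold:
            data.append((x, 0))
    return data  # low-confidence samples are simply dropped

# hypothetical teacher: a thresholded spectral index standing in for a real model
teacher = lambda v: 0.95 if v < 0.1 else (0.05 if v > 0.5 else 0.5)
train_set = pseudo_label(teacher, [0.05, 0.3, 0.7])
# only the confident cases survive: (0.05 -> water, 0.7 -> not water)
```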

Title: Tools and Methods for Distributed and Large-Scale Training of Deep Neural Networks
Speaker: Sina Sheikholeslami, PhD Student in the Division of Software and Computer Systems (SCS) at KTH
Location: Sal-A at KTH Kista
Zoom: https://kth-se.zoom.us/j/69403203069
Bio: Sina is a PhD student in the Division of Software and Computer Systems (SCS) at KTH. Sina's work focuses on the intersection of distributed systems, machine learning/deep learning, and data-intensive computing, as well as the application of ML/DL across various domains. His primary research interest lies in developing simple yet practical solutions to complex challenges in machine learning, deep learning, and scalable computing.
Abstract: Deep Neural Networks (DNNs) have driven recent breakthroughs in Artificial Intelligence (AI) and are widely used in tasks such as satellite image analysis, medical diagnosis, and chatbot development. These advances have been enabled by distributed training, where data and computation are spread across multiple GPUs, overcoming the limitations of single-host training. However, distributed training introduces challenges such as communication overhead, straggler workers, and synchronization issues. Additionally, as DNN training involves multiple stages, reusing computational results, such as leveraging weights from hyperparameter tuning for model initialization, can improve efficiency. As models grow in complexity, understanding the impact of design choices becomes crucial. This work explores tools and methods to enhance distributed training by optimizing workload distribution and analyzing the contribution of different DNN components, improving efficiency and scalability in large-scale AI systems.
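
The core synchronisation step of data-parallel training can be sketched as gradient averaging (what an all-reduce computes; communication pattern only, no real network here):

```python
def allreduce_mean(grads_per_worker):
    """Average per-worker gradient vectors elementwise, as a synchronous
    all-reduce would before each optimiser step."""
    n = len(grads_per_worker)
    dim = len(grads_per_worker[0])
    return [sum(g[i] for g in grads_per_worker) / n for i in range(dim)]

# two workers, each holding a gradient over the same two parameters
avg = allreduce_mean([[1.0, 2.0], [3.0, 4.0]])
```

Stragglers and communication overhead, mentioned in the abstract, arise because this step must wait for the slowest worker every iteration.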

Title: Explanation Methods for Sequential Data Models - From Post-hoc to Interpretable-by-design Approaches for Time Series Classification
Speaker: Riccardo Guidotti, Associate professor at the University of Pisa
Bio: Riccardo Guidotti received a PhD in Computer Science with a thesis on Personal Data Analytics from the University of Pisa. He is currently an Associate Professor at the Department of Computer Science at the University of Pisa, Italy, and a member of the Knowledge Discovery and Data Mining Laboratory (KDDLab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. He won the IBM fellowship program and was an intern at IBM Research Dublin, Ireland, in 2015. His research interests are in personal data mining, clustering, explainable models, and analysis of transactional data.
Abstract: The increasing availability of high-dimensional time series data, such as electrocardiograms, stock indices, and motion sensors, has led to the widespread use of time series classifiers in critical fields like healthcare, finance, and transportation. However, the complexity of these models often makes them black boxes, hindering interpretability. In high-stakes domains, explaining a model’s decisions is vital for trust and accountability. Effective eXplainable AI (XAI) methods for sequential data are essential for providing insights and reinforcing expert decision-making. This presentation addresses the challenge of explaining sequential data models, focusing on time series classification. We begin by reviewing the current literature on XAI for time series classification. Then, we present a series of works that illustrate the transition from general-purpose post-hoc explanation approaches to interpretable-by-design methods. First, we introduce a local post-hoc agnostic subsequence-based time series explainer that can be used to elucidate the predictions of any time series classifier. Next, we demonstrate, through a real case study on car crash prediction, how insights from a post-hoc explainer were crucial in developing an effective interpretable-by-design method. Additionally, we showcase an interpretable subsequence-based classifier by enhancing SAX with dilation and stride to capture temporal patterns effectively. Finally, we explore the use of subsequence-based approaches in other sequential domains like mobility trajectories and text.
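
To ground the SAX-with-dilation idea (a simplified sketch; the talk's interpretable classifier is more involved): a dilated subsequence is z-normalised, piecewise-aggregated, and discretised against Gaussian breakpoints to form a symbolic word.

```python
import bisect
import statistics

BREAKPOINTS = [-0.67, 0.0, 0.67]  # Gaussian breakpoints for a 4-letter alphabet
ALPHABET = "abcd"

def sax_word(series, word_len, dilation=1, start=0):
    """SAX word of a dilated subsequence of `series`."""
    sub = series[start::dilation]            # dilation: take every d-th point
    mu = statistics.fmean(sub)
    sd = statistics.pstdev(sub) or 1.0
    z = [(v - mu) / sd for v in sub]         # z-normalise
    seg = len(z) // word_len                 # piecewise aggregate approximation
    word = ""
    for i in range(word_len):
        avg = statistics.fmean(z[i * seg:(i + 1) * seg])
        word += ALPHABET[bisect.bisect_left(BREAKPOINTS, avg)]
    return word

w = sax_word([0, 1, 2, 3, 4, 5, 6, 7], word_len=2)  # rising ramp: low then high
```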

Title: Explainable Predictive Maintenance
Speaker: Sepideh Pashami, Associate Professor at Halmstad University, and Senior Researcher at RISE
Bio: Sepideh is an associate professor in Machine Learning at the Center for Applied Intelligent Systems Research, Halmstad University, and a senior researcher at RISE. She received her PhD degree from the AASS Research Centre, Örebro University, Sweden, in 2016. In 2020, Sepideh was appointed Technology Area Leader for Aware Intelligent Systems, part of the Intelligent Systems and Digital Design department. Her research interests include predictive maintenance, interactive machine learning, causal inference, and representation learning. She has been involved as a researcher and research leader in several projects (e.g., FREEPORT, EVE, In4Uptime, ARISE, and HEALTH) together with Volvo Group AB, applying machine learning techniques for predictive maintenance of heavy-duty vehicles.
Abstract: Explainable Predictive Maintenance (PM) focuses on creating methods that can explain the operation of AI systems within the PM domain. What makes creating maintenance plans challenging is incorporating AI output into the human decision-making process and integrating it with human expertise. Today, in many industries, various AI systems, often black-box ones, predict failures by analyzing sensor data. They discover symptoms of imminent issues by capturing anomalies and deviations from typical behavior, often with impressive accuracy. This talk will present some of the state-of-the-art methodologies in Explainable AI (XAI) relevant to predictive maintenance problems.

Title: Social Explainable AI: What is it and how to make it happen with CIU
Speaker: Kary Främling, Professor of Data Science at Umeå University
Bio: Kary's research focuses on Explainable Artificial Intelligence (XAI), particularly "outcome explanation," which involves explaining or motivating results, actions, or recommendations from any AI system. A key technique is the Contextual Importance and Utility (CIU) method, developed during his doctoral thesis (1991–1996). From 2000 to 2018, his research centered on intelligent products, the Internet of Things, digital twins, and systems-of-systems. His current XAI work remains closely connected to these areas, aiming to integrate AI into everyday life and products. To ensure AI remains "humane", it is crucial for systems to clearly communicate the reasons behind their actions in a way that is understandable to diverse users and relevant to real-world contexts.
Abstract: Explainable AI (XAI) methods make it possible to explain, or at least justify, recommendations and actions of AI systems in ways similar to humans. However, current XAI methods tend to produce non-interactive results that are mainly (or only) understandable to the AI engineers themselves. Social XAI (sXAI) is proposed as a name for XAI functionality that mimics how humans explain and justify their actions and opinions, at least to some degree. sXAI methods should be interactive and adapt their explanations to the explainee’s background knowledge, interests, and preferences in how to receive explanations, as well as the pace of interaction. The presentation shows how sXAI can be implemented using the Contextual Importance and Utility (CIU) method – and gives some reasons for why we probably won’t see sXAI happening anytime soon.
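
For readers unfamiliar with CIU, a minimal sketch (hypothetical model; real CIU implementations handle feature coalitions, discrete inputs, and more): Contextual Importance is the output range a feature can induce in the current context, and Contextual Utility locates the actual output within that range.

```python
def ciu(predict, instance, feature, value_range, samples=101):
    """Contextual Importance (CI) and Utility (CU) of one numeric feature,
    assuming a scalar model output in [0, 1]."""
    lo, hi = value_range
    outs = []
    for i in range(samples):
        v = lo + (hi - lo) * i / (samples - 1)
        outs.append(predict({**instance, feature: v}))  # vary only this feature
    cmin, cmax = min(outs), max(outs)
    out = predict(instance)
    ci = cmax - cmin                    # how much this feature can move the output
    cu = (out - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

# hypothetical model: clipped weighted sum of two inputs
model = lambda x: min(1.0, max(0.0, 0.8 * x["temp"] + 0.2 * x["wind"]))
ci, cu = ciu(model, {"temp": 0.5, "wind": 0.5}, "temp", (0.0, 1.0))
# temp dominates (CI = 0.8) and the instance sits mid-range (CU = 0.5)
```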