Explainable Artificial Intelligence for Telecommunications
Time: Fri 2026-02-06 09:30
Location: F3 (Flodis), Lindstedtsvägen 26 & 28, Stockholm
Language: English
Subject area: Machine Design
Doctoral student: Ahmad Terra, Mechatronics and Embedded Control Systems
Opponent: Professor Kary Främling, Umeå University
Supervisors: Professor Martin Törngren, Mechatronics and Embedded Control Systems; Adjunct Professor Rafia Inam, Mechatronics and Embedded Control Systems; Adjunct Professor Elena Fersman, Mechatronics and Embedded Control Systems
Abstract
Artificial Intelligence (AI) is a key driver of technological development in many industrial sectors. It is increasingly embedded in components of telecommunications networks to optimize their operation. AI technologies are advancing rapidly, with increasingly sophisticated techniques being introduced. Understanding how an AI model operates and arrives at its output is therefore crucial to ensuring the integrity of the overall system. One way to achieve this is to apply Explainable Artificial Intelligence (XAI) techniques, which generate information about the operation of an AI model. This thesis develops and evaluates XAI techniques to improve the transparency of AI models.
In supervised learning, several XAI methods that compute feature importance were applied to identify the root cause of network operation issues, and their characteristics were compared and analyzed at local, cohort, and global scopes. However, the resulting attribution-based explanations do not provide actionable insight for resolving the underlying issue. Therefore, another type of explanation, the counterfactual, was explored; it indicates the changes necessary to obtain a different outcome. Counterfactual explanations were used to prevent potential issues such as Service Level Agreement (SLA) violations. This method significantly reduced SLA violations in an emulated network, but it requires converting explanations into actions.
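To make the counterfactual idea concrete, the sketch below is illustrative only and not the thesis implementation: it trains a hypothetical binary classifier that flags SLA violations from made-up network KPIs (the feature names, synthetic data, and greedy search are all assumptions) and then looks for small per-feature changes that flip the prediction.

# Minimal counterfactual-explanation sketch (illustrative only, not the thesis
# implementation). A hypothetical binary classifier predicts SLA violation (1)
# vs. no violation (0) from made-up network KPIs; a greedy search then looks
# for small feature changes that flip the prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["cpu_load", "latency", "throughput"]  # hypothetical KPIs in [0, 1]

# Synthetic training data: violations correlate with high load and latency.
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = ((X[:, 0] + X[:, 1]) > 1.2).astype(int)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def counterfactual(x, step=0.05, max_iter=100):
    """Greedily nudge one feature at a time until the predicted class flips,
    returning the modified sample and the total change per feature."""
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == 0:  # no longer a violation
            break
        best, best_p = None, model.predict_proba(x_cf.reshape(1, -1))[0, 1]
        for j in range(len(x_cf)):
            cand = x_cf.copy()
            cand[j] = max(0.0, cand[j] - step)  # try lowering this KPI slightly
            p = model.predict_proba(cand.reshape(1, -1))[0, 1]
            if p < best_p:
                best, best_p = cand, p
        if best is None:  # no single-feature change helps any further
            break
        x_cf = best
    return x_cf, x_cf - x

x = np.array([0.9, 0.8, 0.3])  # a sample predicted as an SLA violation
x_cf, delta = counterfactual(x)
for name, d in zip(feature_names, delta):
    if abs(d) > 1e-9:
        print(f"change {name} by {d:+.2f} to avoid the predicted violation")

The printed deltas are the counterfactual explanation; converting them into concrete network actions is the explanation-to-action step noted above.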
Unlike the previous method, a Reinforcement Learning (RL) agent performs actions in its environment to achieve its goal, eliminating the need for explanation-to-action conversion. Understanding its behavior therefore becomes important, especially when it controls critical infrastructure. In this thesis, two state-of-the-art Explainable Reinforcement Learning (XRL) methods, reward decomposition and Autonomous Policy Explanation (APE), were investigated and implemented to generate explanations for technical and non-technical users, respectively. While reward decomposition explains a model's output and feature attribution explains its input, the connection between the two was missing in the literature. This thesis proposes combining feature importance with reward decomposition to generate detailed explanations and to identify and mitigate bias in AI models. In addition, a detailed contrastive explanation can be generated to explain why one action is preferred over another. For non-technical users, APE was integrated with the attribution method to generate explanations for a given condition, and with a counterfactual method to generate more meaningful explanations. However, APE scales poorly with the number of predicates, so an alternative textual explainer, the Clustering-Based Summarizer (CBS), was proposed to address this limitation. Because the evaluation of textual explanations is limited in the literature, a rule extraction technique was proposed to evaluate them in terms of their characteristics, fidelity, and performance. In addition, two refinement techniques were proposed to improve the F1 score and reduce the number of duplicate conditions.
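As an illustration of how reward decomposition supports contrastive explanations, the sketch below uses hypothetical, hand-picked Q-values decomposed into coverage, capacity, and quality components for three made-up RET-style actions; the component names, values, and actions are assumptions for illustration, not results from the thesis.

# Minimal reward-decomposition sketch (illustrative only, not the thesis code).
# The idea: if the reward is a sum of named components and a separate Q-value
# is learned per component, per-component Q-differences explain why one action
# is preferred over another (a contrastive explanation).
import numpy as np

components = ["coverage", "capacity", "quality"]  # hypothetical reward parts
actions = ["tilt_up", "keep_tilt", "tilt_down"]   # made-up RET-style actions

# Hypothetical decomposed Q-values for one state: rows = components, columns = actions.
q_decomposed = np.array([
    [0.2, 0.5, 0.9],   # coverage
    [0.7, 0.4, 0.1],   # capacity
    [0.3, 0.6, 0.4],   # quality
])
q_total = q_decomposed.sum(axis=0)  # standard Q-values are the component sums
best = int(np.argmax(q_total))

def contrast(a, b):
    """Per-component Q-value differences between actions a and b: positive
    entries are reasons a is preferred, negative entries are the trade-offs."""
    diff = q_decomposed[:, a] - q_decomposed[:, b]
    return {c: round(float(d), 2) for c, d in zip(components, diff)}

print(f"chosen action: {actions[best]} (total Q = {q_total[best]:.2f})")
for alt in range(len(actions)):
    if alt != best:
        print(f"  vs {actions[alt]}: {contrast(best, alt)}")

Combining such per-component differences with feature attributions over the input state yields the more detailed explanations, and the bias checks, described above.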
In summary, this thesis makes the following contributions: a) implementation and analysis of different XAI methods; b) methods to utilize explanations and explainers; c) evaluation methods for AI explanations; and d) methods to improve explanation quality. The thesis revolves around network automation in telecommunications: the explainability methods for supervised learning were applied to a network slice assurance use case, and those for reinforcement learning to a network optimization use case, namely Remote Electrical Tilt (RET). In addition, applications in other open-source environments were presented, demonstrating broader applicability across use cases.