
End-to-end performance prediction and automated resource management of cloud services

Time: Mon 2024-06-10 10.00

Location: Q2, Malvinas väg 10, Stockholm

Language: English

Subject area: Electrical Engineering

Doctoral student: Forough Shahabsamani, Network and Systems Engineering

Opponent: Rafael Pasquini, UNIVERSIDADE FEDERAL DE UBERLÂNDIA

Supervisor: Prof. Rolf Stadler, Network and Systems Engineering


Abstract

Cloud-based services are integral to modern life. Cloud systems aim to provide customers with uninterrupted services of high quality while enabling cost-effective fulfillment by providers. The key to meeting quality requirements and end-to-end performance objectives is to devise effective strategies for allocating resources to the services. This in turn requires automation of resource allocation. Recently, researchers have studied learning-based approaches, especially reinforcement learning (RL), for automated resource allocation. These approaches are particularly promising for resource allocation in cloud systems because they can cope with the architectural complexity of a cloud environment. Previous research shows that reinforcement learning is effective for specific types of controls, such as horizontal or vertical scaling of compute resources. However, major obstacles to operational deployment remain. Chief among them is that reinforcement learning methods require long times for training, and for retraining after system changes.

With this thesis, we aim to overcome these obstacles and demonstrate dynamic resource allocation using reinforcement learning on a testbed. On the conceptual level, we address two interconnected problems: predicting end-to-end service performance and automating resource allocation for cloud services. First, we study methods to predict the conditional density of service metrics and demonstrate the effectiveness of dimensionality reduction in lowering monitoring, communication, and model-training overhead. For automated resource allocation, we develop a framework for RL-based control. Our approach involves learning a system model from measurements, using a simulator to learn resource allocation policies, and adapting these policies online through a rollout mechanism. Experimental results from our testbed show that, using our framework, we can effectively achieve end-to-end performance objectives by dynamically allocating resources to the services with several types of control actions simultaneously.
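To illustrate the first idea — predicting the conditional density of a service metric from reduced monitoring data — the sketch below combines PCA (one common dimensionality reduction method; the thesis does not specify which method or model is used, so all names, dimensions, and the linear-Gaussian density model here are illustrative assumptions) with a simple conditional-density estimate:

```python
import numpy as np

# Illustrative sketch: reduce high-dimensional infrastructure metrics
# with PCA, then fit a simple conditional-density model (linear mean,
# Gaussian noise) for a service metric such as response time.
# All data, dimensions, and model choices are assumptions for the demo.

rng = np.random.default_rng(0)

# Simulated monitoring data: 500 samples of 50 correlated metrics
# driven by a 5-dimensional latent state.
latent = rng.normal(size=(500, 5))
mixing = rng.normal(size=(5, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 50))
# Service metric (e.g., response time) depends on the latent state.
y = 2.0 * latent[:, 0] + latent[:, 1] + 0.1 * rng.normal(size=500)

def pca_fit(X, k):
    """Return the sample mean and top-k principal directions via SVD."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def pca_transform(X, mu, W):
    """Project centered data onto the principal directions."""
    return (X - mu) @ W

# Reduce 50 metrics to 5 features, cutting monitoring/training overhead.
mu, W = pca_fit(X, k=5)
Z = pca_transform(X, mu, W)

# Conditional density p(y | z): least-squares mean + Gaussian residual.
design = np.c_[Z, np.ones(len(Z))]
A, *_ = np.linalg.lstsq(design, y, rcond=None)
sigma = (y - design @ A).std()

def predict_density(x_new):
    """Return (mean, std) of the predicted service-metric distribution."""
    z = pca_transform(x_new.reshape(1, -1), mu, W)
    mean = (np.c_[z, np.ones(1)] @ A)[0]
    return mean, sigma
```

In this toy setting the five principal components recover the latent subspace, so the conditional density is nearly as sharp as with the full 50 metrics — the overhead-reduction argument the abstract makes.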
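The rollout mechanism mentioned above can be sketched as a one-step lookahead through a learned system model, with the tail of each trajectory completed by the policy learned offline in the simulator. The toy system, cost function, and policies below are hypothetical stand-ins, not the thesis's actual models:

```python
# Illustrative sketch of rollout-based online policy adaptation.
# State = (load, cpu); actions scale the allocated CPU up or down.
# The dynamics, cost, and base policy are assumptions for the demo.

ACTIONS = [-1, 0, +1]          # scale down, keep, scale up (CPU units)
TARGET_LATENCY = 1.0           # end-to-end performance objective

def learned_model(state, action):
    """Learned one-step system model returning (next_state, latency)."""
    load, cpu = state
    cpu = max(1, cpu + action)
    latency = load / cpu       # crude queuing-style approximation
    return (load, cpu), latency

def cost(latency, cpu):
    """Penalize objective violations plus a small resource-usage cost."""
    return max(0.0, latency - TARGET_LATENCY) * 10.0 + 0.1 * cpu

def base_policy(state):
    """Policy learned offline in the simulator (here: keep allocation)."""
    return 0

def rollout_policy(state, horizon=3):
    """Try each action, simulate the tail with the base policy,
    and return the action with the lowest accumulated cost."""
    best_action, best_cost = None, float("inf")
    for a in ACTIONS:
        s, latency = learned_model(state, a)
        total = cost(latency, s[1])
        for _ in range(horizon - 1):
            s, latency = learned_model(s, base_policy(s))
            total += cost(latency, s[1])
        if total < best_cost:
            best_action, best_cost = a, total
    return best_action
```

Under high load the rollout policy scales up even though the base policy would keep the allocation fixed; under low load it scales down. This one-step improvement over a fixed base policy is what lets the controller adapt online without full retraining.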

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-346585