Belief-aided Robust Control for RET Optimization

Examiner: Elling W. Jacobsen

Time: Thu 2021-06-03 13.00 - 13.45

Location: Zoom link:

Respondent: Jack Jönsson

Opponent: Albin Larsson Forsberg

Supervisor: Alexandre Proutière

Abstract: Remote Electrical Tilt (RET) is a method for configuring antenna 
downtilt in base stations to optimize mobile network performance. 
Reinforcement Learning (RL) is an approach to automating this process by 
letting an agent learn an optimal control strategy and adapt to the dynamic 
environment. Applying RL in the real world comes with challenges: the RET 
problem imposes performance requirements, and the system is only partially 
observable because exogenous factors induce noise in the observations.

This thesis proposes a solution method by modeling the problem as a Partially 
Observable Markov Decision Process (POMDP). The set of hidden states is modeled 
as a high-level representation of situations, each requiring one of the possible 
actions: uptilt, downtilt, or no change. From this model, a Bayesian Neural Network 
(BNN) is trained to predict an observation model relating observed Key Performance 
Indicators (KPIs) to the hidden states. 
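A minimal sketch of such an observation model, assuming a small ensemble of random linear-softmax models as a stand-in for posterior weight samples from a BNN. The KPI dimension, state names, and sample count are illustrative assumptions, not details from the thesis:

```python
import numpy as np

# Hedged sketch: the thesis trains a Bayesian Neural Network; here an
# ensemble of linear-softmax models stands in for BNN weight samples.
# N_KPIS, N_SAMPLES, and the state labels are illustrative assumptions.

STATES = ["uptilt", "downtilt", "no_change"]
N_KPIS = 4          # e.g. coverage, capacity, quality, interference KPIs
N_SAMPLES = 10      # number of weight samples drawn from the "posterior"

rng = np.random.default_rng(0)
weight_samples = rng.normal(size=(N_SAMPLES, N_KPIS, len(STATES)))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def observation_model(kpis):
    """Predictive p(state | KPIs), averaged over posterior weight samples."""
    logits = kpis @ weight_samples            # shape (N_SAMPLES, n_states)
    return softmax(logits).mean(axis=0)       # Bayesian model averaging

probs = observation_model(rng.normal(size=N_KPIS))
print(dict(zip(STATES, probs.round(3))))
```

Averaging the softmax outputs over weight samples is what gives the predictive distribution its uncertainty-awareness; a single deterministic network would collapse the ensemble to one sample.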

The observation model is used to estimate the belief-state probability of each hidden 
state, from which the control action is decided through a restrictive threshold policy. 
Experiments comparing the method to a baseline Deep Q-Network (DQN) agent show that it 
reaches the same average performance increase as the baseline while outperforming the 
baseline on two metrics important for robust and safe control behaviour: the worst-case 
minimum reward increase and the average reward increase per number of tilt actions.
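The belief update and the restrictive decision step can be sketched as follows. This assumes a static hidden state per decision episode, an illustrative threshold of 0.8, and hand-picked likelihood values; none of these numbers come from the thesis:

```python
import numpy as np

# Hedged sketch of the decision step: maintain a belief over the three
# hidden states and emit a tilt action only when one belief probability
# clears a threshold; otherwise default to "no change". The threshold
# and the likelihood values below are illustrative assumptions.

STATES = ["uptilt", "downtilt", "no_change"]
THRESHOLD = 0.8  # restrictive: act only under high certainty

def update_belief(belief, likelihood):
    """Bayes update: posterior ∝ likelihood × prior (static-state assumption)."""
    posterior = likelihood * belief
    return posterior / posterior.sum()

def threshold_policy(belief):
    """Return a tilt action only if its belief exceeds the threshold."""
    i = int(np.argmax(belief))
    if STATES[i] != "no_change" and belief[i] >= THRESHOLD:
        return STATES[i]
    return "no_change"

belief = np.full(3, 1 / 3)                              # uniform prior
for likelihood in ([0.7, 0.1, 0.2], [0.8, 0.1, 0.1]):   # p(KPIs | state)
    belief = update_belief(belief, np.array(likelihood))
    action = threshold_policy(belief)
    print(action, belief.round(3))
```

Note how the policy stays at "no change" after the first observation even though "uptilt" is already the most likely state: withholding action until the belief is confident is what trades a few extra observations for the safer worst-case behaviour the experiments measure.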
Belongs to: Decision and Control Systems
Last changed: May 25, 2021