This paper shows that self-supervised pretraining on unlabeled data enables accurate prediction even when labeled data is scarce, which is the norm in live telecom networks. It evaluates how a single pretrained model can be efficiently adapted to multiple downstream tasks that share a similar feature structure as new applications are introduced. This modular reuse across a growing set of tasks improves model lifecycle management: fewer labels are needed, less compute is spent, and there is less need to train and maintain many separate task-specific models from scratch. The work also releases a dataset to support research on realistic telecom network scenarios. The paper has been accepted for publication in IEEE Transactions on Network and Service Management (TNSM).
Authors: Akhila Rao; Magnus Boman
Paper link: https://ieeexplore.ieee.org/document/11208832
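For readers unfamiliar with the pattern, the sketch below illustrates the pretrain-then-adapt workflow described above: one encoder is pretrained on plentiful unlabeled data with a self-supervised objective, then reused by small task-specific heads trained on a few labels each. This is a minimal illustration under assumed details, not the authors' implementation; the masked-reconstruction objective, network sizes, and all names here are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of pretrain-then-adapt:
# one shared encoder, several cheap task-specific heads.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared representation model, pretrained without labels."""
    def __init__(self, n_features: int, d_hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden),
        )

    def forward(self, x):
        return self.net(x)

def pretrain(encoder, unlabeled_x, epochs=5, mask_prob=0.3, d_hidden=64):
    """Self-supervised pretraining: reconstruct randomly masked features.
    (Masked reconstruction is one common choice of pretext task, assumed here.)"""
    decoder = nn.Linear(d_hidden, unlabeled_x.shape[1])
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
    for _ in range(epochs):
        mask = torch.rand_like(unlabeled_x) < mask_prob   # which entries to hide
        corrupted = unlabeled_x.masked_fill(mask, 0.0)
        recon = decoder(encoder(corrupted))
        loss = ((recon - unlabeled_x) ** 2)[mask].mean()  # loss only on masked entries
        opt.zero_grad(); loss.backward(); opt.step()

def fit_task_head(encoder, labeled_x, labeled_y, n_out, epochs=20, d_hidden=64):
    """Adaptation: freeze the encoder, train only a small task-specific head."""
    head = nn.Linear(d_hidden, n_out)
    opt = torch.optim.Adam(head.parameters())
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        with torch.no_grad():
            z = encoder(labeled_x)            # reused, frozen representation
        loss = loss_fn(head(z), labeled_y)
        opt.zero_grad(); loss.backward(); opt.step()
    return head

# One encoder pretrained on plentiful unlabeled data (random stand-in here)...
enc = Encoder(n_features=16)
pretrain(enc, torch.randn(1024, 16))
# ...then adapted to multiple downstream tasks with few labels each.
head_a = fit_task_head(enc, torch.randn(32, 16), torch.randn(32, 1), n_out=1)
head_b = fit_task_head(enc, torch.randn(32, 16), torch.randn(32, 2), n_out=2)
```

Freezing the encoder during adaptation is what makes the reuse cheap: each new task only adds a small head, rather than a full model trained and maintained from scratch.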