
Paper on self-supervised learning for telecom networks accepted

This paper shows that self-supervised pretraining on unlabeled data enables accurate prediction even when labeled data is scarce, which is the norm in live telecom networks. It evaluates how a single pretrained model can be efficiently adapted to multiple downstream tasks that share a similar feature structure, as new applications are introduced. This modular reuse across a growing set of tasks improves model lifecycle management: fewer labels are needed, less compute is spent, and there is less need to train and maintain many separate task-specific models from scratch. The work also releases a dataset to support research on realistic telecom network scenarios. The paper has been accepted for publication in the IEEE Transactions on Network and Service Management (TNSM).
Authors: Akhila Rao; Magnus Boman
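
Purely as an illustration of the modular-reuse pattern described above (not the paper's actual method), a minimal PyTorch-style sketch might look as follows. The encoder architecture, task names, dimensions, and training loop are all hypothetical: a pretrained encoder is frozen and shared, and only a small task-specific head is trained per downstream task on the scarce labels.

    # Minimal sketch (hypothetical names and shapes): one self-supervised
    # pretrained encoder is frozen and reused across several downstream
    # telecom tasks; each task trains only a small head on scarce labels.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Stand-in for a backbone pretrained on unlabeled network telemetry."""
        def __init__(self, in_dim: int = 64, hidden: int = 128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    encoder = Encoder()
    encoder.requires_grad_(False)  # reuse the pretrained features as-is

    # One lightweight head per downstream task (illustrative tasks only);
    # these are the only parameters being trained.
    heads = {
        "throughput": nn.Linear(128, 1),  # regression head
        "anomaly": nn.Linear(128, 2),     # classification head
    }
    optimizers = {name: torch.optim.Adam(h.parameters(), lr=1e-3)
                  for name, h in heads.items()}

    def train_step(task: str, x: torch.Tensor, y: torch.Tensor, loss_fn) -> float:
        z = encoder(x)                     # shared, frozen representation
        loss = loss_fn(heads[task](z), y)  # gradients flow into the head only
        optimizers[task].zero_grad()
        loss.backward()
        optimizers[task].step()
        return loss.item()

    # Example usage (x_batch, y_batch are hypothetical labeled batches):
    # train_step("throughput", x_batch, y_batch, nn.MSELoss())

Adding a new downstream task in this setup means attaching one more small head to the frozen encoder, rather than training and maintaining another full model from scratch.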

HOL4P4 presented at the P4 Developer Days

As part of the P4 Developer Days series of live educational webinars, Didrik Lundberg presented an overview of the HOL4P4 formalisation and some of the tools that are built on top of it. The presentation was recorded and is now available online.

YouTube: https://www.youtube.com/watch?v=ZkOKQ-e97YQ
p4.org: https://p4.org/event/p4-developer-days-from-semantics-to-software-building-a-verification-ecosystem-for-p4-using-hol4p4/

Three papers accepted at LLM4Code!

Our work on leveraging LLMs for 1) generating verifiable code, 2) discovering software vulnerabilities, and 3) reducing the size of transformer-based code-generation models has been accepted for publication at the LLM4Code workshop!

  • From Scientific Texts to Verifiable Code: Automating the Process with Transformers
In the International Workshop on Large Language Models for Code (LLM4Code), co-located with ICSE, 2025.
    C. Wang, M. Scazzariello, M. Chiesa
[arXiv] [Demo video]
  • Automating the Detection of Code Vulnerabilities by Analyzing GitHub Issues
In the International Workshop on Large Language Models for Code (LLM4Code), co-located with ICSE, 2025.
D. Cipollone, C. Wang, M. Scazzariello, S. Ferlin, M. Izadi, D. Kostić, M. Chiesa
    [arXiv]
  • Deriving Coding-Specific Sub-Models from LLMs using Resource-Efficient Pruning
In the International Workshop on Large Language Models for Code (LLM4Code), co-located with ICSE, 2025.
    L. Puccioni, A. Farshin, M. Scazzariello, C. Wang, M. Chiesa, D. Kostić
    [arXiv]