
NetConfEval accepted at CoNEXT 2024

Can Large Language Models facilitate network configuration? In our recently accepted CoNEXT 2024 paper, we investigate the opportunities and challenges in operating network systems using recent LLMs.

We devise a benchmark for evaluating the capabilities of different LLMs on a variety of networking tasks and show different ways of integrating such models within existing systems. Our results show that different models work better on different tasks. Translating high-level human-language requirements into formal specifications (e.g., API function calls) can be done with small models. However, generating code that controls network systems is only feasible with larger LLMs, such as GPT-4.
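To illustrate the first kind of integration, a small model can be asked to answer with a structured "function call" that the controller then validates and executes. The sketch below is purely illustrative: the function name, schema, and requirement are hypothetical examples, not the actual NetConfEval API.

```python
import json

# Hypothetical network-control function that a small LLM could be asked to
# "call" when given a natural-language requirement. The name and parameters
# are illustrative only, not the actual NetConfEval interface.
FUNCTION_SPEC = {
    "name": "add_reachability_policy",
    "description": "Allow or block traffic between two subnets.",
    "parameters": {
        "type": "object",
        "properties": {
            "source": {"type": "string", "description": "Source subnet, e.g. 10.0.1.0/24"},
            "destination": {"type": "string", "description": "Destination subnet"},
            "allowed": {"type": "boolean", "description": "True to permit traffic"},
        },
        "required": ["source", "destination", "allowed"],
    },
}

def dispatch(llm_output: str) -> dict:
    """Validate the model's structured output before handing it to the controller."""
    call = json.loads(llm_output)
    if call["name"] != FUNCTION_SPEC["name"]:
        raise ValueError(f"unexpected function: {call['name']}")
    args = call["arguments"]
    missing = [p for p in FUNCTION_SPEC["parameters"]["required"] if p not in args]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return args

# The kind of structured answer a model might return for
# "Hosts in 10.0.1.0/24 must be able to reach the web servers in 10.0.2.0/24".
print(dispatch('{"name": "add_reachability_policy", '
               '"arguments": {"source": "10.0.1.0/24", '
               '"destination": "10.0.2.0/24", "allowed": true}}'))
```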

This is a fundamental first step in our SEMLA project, which explores ways to integrate LLMs into system development.

GitHub code: link

Hugging Face: link

Paper PDF: link


Two master's theses in progress

Two exceptional students have been hired through a selective process within the SEMLA project: Laura Puccioni and Daniele Cipollone. Laura is working on understanding how to compress models for code generation, while Daniele is focusing on detecting vulnerabilities using LLMs. We look forward to seeing the outcome of the two theses!

Talk at 5G Italy

We presented results from the SEMLA project in a talk and as part of a panel discussion at the 5G Italy event in Rome (https://www.5gitaly.eu/). Many interesting discussions and follow-ups!

Talk at Global Connect

We presented part of the current results from the SEMLA project in a talk and as part of a panel discussion at the Global Connect event in Paris. Many interesting discussions and follow-ups!

NetBuddy paper

We published on arXiv the preliminary work at the core of SEMLA.


Abstract. This paper explores opportunities to utilize Large Language Models (LLMs) to make network configuration human-friendly, simplifying the configuration of network devices and minimizing errors. We examine the effectiveness of these models in translating high-level policies and requirements (i.e., specified in natural language) into low-level network APIs, which requires understanding the hardware and protocols. More specifically, we propose NETBUDDY for generating network configurations from scratch and modifying them at runtime. NETBUDDY splits the generation of network configurations into fine-grained steps and relies on self-healing code-generation approaches to better take advantage of the full potential of LLMs. We first thoroughly examine the challenges of using these models to produce a fully functional & correct configuration, and then evaluate the feasibility of realizing NETBUDDY by building a proof-of-concept solution using GPT-4 to translate a set of high-level requirements into P4 and BGP configurations and run them using the Kathará network emulator.

https://arxiv.org/pdf/2309.06342.pdf
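To make the self-healing code-generation idea from the abstract concrete, here is a minimal sketch of such a loop under assumptions of our own: `query_llm` and `validate_in_emulator` are hypothetical placeholders, and the real NETBUDDY pipeline, prompts, and Kathará integration differ in detail.

```python
# Minimal sketch of a self-healing generation loop: produce a candidate
# configuration, try it in an emulated network, and feed errors back to the
# model until it succeeds or the attempt budget is exhausted.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM (e.g., GPT-4) returning a candidate config."""
    raise NotImplementedError

def validate_in_emulator(config: str) -> tuple[bool, str]:
    """Placeholder: deploy the candidate config in an emulated network
    (e.g., Kathará) and return (success, error_log)."""
    raise NotImplementedError

def self_healing_generate(requirement: str, max_attempts: int = 3) -> str:
    prompt = f"Generate a network configuration for: {requirement}"
    for _ in range(max_attempts):
        candidate = query_llm(prompt)
        ok, errors = validate_in_emulator(candidate)
        if ok:
            return candidate
        # Feed the emulator's error output back to the model and retry.
        prompt = (f"The previous configuration failed with:\n{errors}\n"
                  f"Fix it. Original requirement: {requirement}")
    raise RuntimeError("no valid configuration produced within the attempt budget")
```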