NetConfEval accepted at CoNEXT 2024

Can Large Language Models facilitate network configuration? In our recently accepted CoNEXT 2024 paper, we investigate the opportunities and challenges in operating network systems using recent LLMs.

We devise a benchmark for evaluating the capabilities of different LLMs on a variety of networking tasks and show different ways of integrating such models into existing systems. Our results show that different models work better on different tasks. Translating high-level human-language requirements into formal specifications (e.g., API function calls) can be done with small models. However, generating code that controls network systems is only doable with larger LLMs, such as GPT-4.
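
To give a concrete flavor of the first task, here is a minimal sketch of requirement-to-specification translation via LLM function calling, written against the OpenAI Python client. The set_reachability function, its schema, and the model name are illustrative assumptions, not the exact interface or prompts evaluated in the paper.

```python
# Minimal sketch of requirement-to-specification translation via LLM
# function calling, using the OpenAI Python client (openai>=1.0).
# The set_reachability schema is an illustrative assumption, not the
# exact API evaluated in the paper.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "set_reachability",  # hypothetical controller API
        "description": "Allow or block traffic from a source to a destination.",
        "parameters": {
            "type": "object",
            "properties": {
                "src": {"type": "string"},
                "dst": {"type": "string"},
                "allow": {"type": "boolean"},
            },
            "required": ["src", "dst", "allow"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # the paper finds small models suffice for this task
    messages=[{"role": "user",
               "content": "Traffic from host h1 must never reach server db1."}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "set_reachability"}},
)

call = resp.choices[0].message.tool_calls[0].function
print(call.name, json.loads(call.arguments))
# expected output along the lines of:
# set_reachability {'src': 'h1', 'dst': 'db1', 'allow': False}
```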

This is a fundamental first step in our SEMLA project, which looks at ways to integrate LLMs into system development.

GitHub code: link

Hugging Face: link

Paper PDF: link


Best paper at ACM CoNEXT 2023

We are hugely honored that our “Millions of Low-Latency Insertions on ASIC switches” paper received the Best Paper Award at ACM CoNEXT 2023! More details are available in our earlier post.

From left to right: Tommaso Caiazzi, Mariano Scazzariello, Marco Chiesa, Olivier Bonaventure (TPC co-chair)

SEMLA: New Vinnova-funded project on LLMs for cybersecurity

Our “SEMLA: Securing Enterprises via Machine-Learning-based Automation” project proposal has been selected for funding by Vinnova. The total project cost is 12 MSEK, with Prof. Marco Chiesa as the PI. Other project partners include members from the Computer Security group at KTH, the Connected Intelligence unit at RISE, Red Hat, and Saab.

The SEMLA project seeks to make the development of software systems more resilient, secure, and cost-effective. SEMLA leverages recent advancements in machine learning (ML) and artificial intelligence (AI) to automate critical yet common and time-consuming software development tasks that often lead to catastrophic security vulnerabilities.

Switcharoo accepted at CoNEXT 2023

Today’s network functions require keeping state at the granularity of individual flows. Storing such state on network devices is highly challenging due to the complexity of the involved data structures. As a result, the state is often kept on inefficient CPU-based servers rather than on high-speed ASIC network switches. In our newly accepted CoNEXT paper, we demonstrate that it is possible to perform tens of millions of low-latency flow state insertions on ASIC switches; in a common datacenter scenario, our implementation requires up to 75x less memory than existing probabilistic data structures. A PDF of the paper will soon be available. This is joint work by Mariano Scazzariello, Tommaso Caiazzi (Roma Tre University), and Marco Chiesa.
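
For intuition, the sketch below shows the general kind of exact-match, multi-hash flow table such designs build on, and why insertions are the hard part: inserting a key may displace existing entries, which is costly to do at line rate inside an ASIC pipeline. This is a toy cuckoo-hashing illustration, not Switcharoo’s actual data structure.

```python
# Toy cuckoo-hashing flow table: exact per-flow state with two candidate
# slots per key. Illustrates why insertions are hard (they may displace
# existing entries); this is NOT Switcharoo's actual algorithm.
import random

SLOTS = 8  # tiny for illustration; switch tables hold millions of entries

class FlowTable:
    def __init__(self):
        self.t1 = [None] * SLOTS
        self.t2 = [None] * SLOTS

    def _h1(self, flow):
        return hash(("salt1", flow)) % SLOTS

    def _h2(self, flow):
        return hash(("salt2", flow)) % SLOTS

    def insert(self, flow, state, max_kicks=16):
        entry = (flow, state)
        for _ in range(max_kicks):
            for table, h in ((self.t1, self._h1), (self.t2, self._h2)):
                i = h(entry[0])
                if table[i] is None:
                    table[i] = entry
                    return True
            # Both candidate slots taken: evict a victim and re-insert it.
            table, h = random.choice(((self.t1, self._h1), (self.t2, self._h2)))
            i = h(entry[0])
            table[i], entry = entry, table[i]
        return False  # table too loaded; real designs must handle this

    def lookup(self, flow):
        for table, h in ((self.t1, self._h1), (self.t2, self._h2)):
            e = table[h(flow)]
            if e is not None and e[0] == flow:
                return e[1]
        return None

ft = FlowTable()
five_tuple = ("10.0.0.1", "10.0.0.2", 6, 1234, 80)
ft.insert(five_tuple, {"pkts": 1})
print(ft.lookup(five_tuple))  # {'pkts': 1}
```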

PipeCache accepted at SIGMETRICS 2023

Similarly to multi-core CPUs, network devices increasingly rely on parallel packet-processing engines to achieve extremely high throughput (up to 16 pipes processing 50 terabits per second on a single chip). In our recent paper accepted at ACM SIGMETRICS, we unveil, quantify, and mitigate the impact of deploying existing network monitoring mechanisms on multi-pipe network devices. Our design, called PipeCache, reduces memory requirements (a constrained resource on ASIC devices) by up to 16x! A PDF of the paper is available here. Code is available here.
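
As a toy model of the underlying problem, the sketch below shows why monitoring state naively replicated across pipes multiplies memory by the pipe count: each pipe observes only its share of the traffic yet keeps a full-size table, and per-pipe tables must be merged to recover correct flow counts. This illustrates the problem setting only, not PipeCache’s actual caching mechanism.

```python
# Toy model of per-pipe monitoring state. Each pipe sees only its share
# of the traffic, yet a naive design provisions a full-size table in
# every pipe, so total memory grows with the pipe count (up to 16x).
# Illustration of the problem only, not PipeCache's mechanism.
from collections import Counter

PIPES = 4
packets = ["f1", "f2", "f1", "f3", "f1", "f2"]  # flow IDs of arriving packets

# Each pipe keeps its own counter table for the packets it processes.
per_pipe = [Counter() for _ in range(PIPES)]
for i, flow in enumerate(packets):
    per_pipe[i % PIPES][flow] += 1  # traffic is spread across pipes

# Correct flow counts require merging all per-pipe tables; naively, each
# table must still be sized for the full flow set, wasting PIPES x memory.
merged = sum(per_pipe, Counter())
print(merged)  # Counter({'f1': 3, 'f2': 2, 'f3': 1})
```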