
Best Paper Award at SOSP 2021 for our paper “LineFS: Efficient SmartNIC Offload of a Distributed File System with Pipeline Parallelism”

We are very happy to announce that our LineFS paper was among the three papers that won the Best Paper Award at SOSP 2021!

LineFS builds upon our previous work on Assise [OSDI ’20] by offloading CPU-intensive tasks to a SmartNIC (BlueField-1 in our case), yielding up to 80% performance improvement across the board.

Jongyul’s presentation video is already available online.

This is joint work with

Jongyul Kim (KAIST), Insu Jang (University of Michigan), Waleed Reda (KTH Royal Institute of Technology / Université catholique de Louvain), Jaeseong Im (KAIST), Marco Canini (KAUST), Dejan Kostić (KTH Royal Institute of Technology), Youngjin Kwon (KAIST), Simon Peter (The University of Texas at Austin), and Emmett Witchel (The University of Texas at Austin / Katana Graph).

The full abstract is as follows:

In multi-tenant systems, the CPU overhead of distributed file systems (DFSes) is increasingly a burden to application performance. CPU and memory interference cause degraded and unstable application and storage performance, in particular for operation latency. Recent client-local DFSes for persistent memory (PM) accelerate this trend. DFS offload to SmartNICs is a promising solution to these problems, but it is challenging to fit the complex demands of a DFS onto simple SmartNIC processors located across PCIe.

We present LineFS, a SmartNIC-offloaded, high-performance DFS with support for client-local PM. To fully leverage the SmartNIC architecture, we decompose DFS operations into execution stages that can be offloaded to a parallel data-path execution pipeline on the SmartNIC. LineFS offloads CPU-intensive DFS tasks, like replication, compression, data publication, index and consistency management to a SmartNIC.

We implement LineFS on the Mellanox BlueField SmartNIC and compare it to Assise, a state-of-the-art PM DFS. LineFS improves latency in LevelDB up to 80% and throughput in Filebench up to 79%, while providing extended DFS availability during host system failures.
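For intuition, here is a minimal, self-contained sketch of the pipeline-parallelism idea: a heavyweight operation is split into stages that run concurrently on separate cores, connected by bounded queues. The stage names (copy, compress, replicate) and the pthread-based queues below are illustrative assumptions, not LineFS’s actual implementation; in LineFS, such stages run in a data-path execution pipeline on the SmartNIC.

```c
/* Sketch of pipeline parallelism: stages connected by bounded queues.
 * Stage names and queue sizes are illustrative, not LineFS's design. */
#include <pthread.h>
#include <stdio.h>

#define QCAP   8
#define NITEMS 16

typedef struct {                 /* bounded single-producer/single-consumer queue */
    int buf[QCAP];
    int head, tail, count;
    pthread_mutex_t mu;
    pthread_cond_t not_full, not_empty;
} queue_t;

static void q_init(queue_t *q) {
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->mu, NULL);
    pthread_cond_init(&q->not_full, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

static void q_push(queue_t *q, int v) {
    pthread_mutex_lock(&q->mu);
    while (q->count == QCAP) pthread_cond_wait(&q->not_full, &q->mu);
    q->buf[q->tail] = v;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->mu);
}

static int q_pop(queue_t *q) {
    pthread_mutex_lock(&q->mu);
    while (q->count == 0) pthread_cond_wait(&q->not_empty, &q->mu);
    int v = q->buf[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->mu);
    return v;
}

static queue_t q1, q2;           /* copy -> compress -> replicate */

static void *stage_compress(void *arg) {
    (void)arg;
    for (int i = 0; i < NITEMS; i++) {
        int chunk = q_pop(&q1);
        q_push(&q2, chunk * 2);  /* stand-in for CPU-heavy work (compression, hashing, ...) */
    }
    return NULL;
}

static void *stage_replicate(void *arg) {
    (void)arg;
    for (int i = 0; i < NITEMS; i++)
        printf("replicated chunk %d\n", q_pop(&q2));
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    q_init(&q1); q_init(&q2);
    pthread_create(&t1, NULL, stage_compress, NULL);
    pthread_create(&t2, NULL, stage_replicate, NULL);
    for (int i = 0; i < NITEMS; i++)   /* stage 1 ("copy") runs on this thread */
        q_push(&q1, i);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

Because each stage occupies its own core and hands work downstream as soon as a chunk is done, the pipeline keeps all stages busy at once, which is what makes the offload to the SmartNIC’s parallel cores pay off.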

Hamid Ghasemirahni’s Licentiate Defense

We are happy to announce that Hamid Ghasemirahni successfully defended his licentiate thesis (a licentiate is a KTH degree half-way to a PhD)! Marco Chiesa has done an excellent job as a co-advisor, and we are once again very grateful to Prof. Gerald Q. Maguire Jr. for his key insights. Prof. Al Davis was a superb opponent at the licentiate seminar. Hamid’s thesis (hopefully one of many to come in this project) is available online:

Packet Order Matters!: Improving Application Performance by Deliberately Delaying Packets

We couldn’t take the obligatory hallway shot, so we faked the gift giving over Zoom!

Our PAM 2021 paper: “What you need to know about (Smart) Network Interface Cards”

In our PAM 2021 paper, we study the performance of (smart) Network Interface Cards (NICs) for widely deployed packet classification operations, focusing on four 100-200 GbE NICs from one of the largest NIC vendors worldwide.

We show that the forwarding throughput of the tested NICs sharply degrades when i) the forwarding plane is updated and ii) packets match multiple forwarding tables in the NIC.

Moreover, we uncover that the standard DPDK rule update API realizes slow and non-atomic rule updates using a sequence of rule insertion and deletion operations.

We solve this problem by introducing a direct in-memory rule update mechanism that achieves 80% higher throughput than the standard DPDK rule update API.
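To make the non-atomicity concrete, here is a hedged sketch of how a rule “update” goes through the standard rte_flow API: lacking an in-place modify, the update becomes a sequence of deletion and insertion (the ordering can vary by driver), leaving a window in which the rule is not installed. The wrapper function and the abstract pattern/action parameters below are our own illustration, not the paper’s code.

```c
/* Illustrative sketch: a rule "update" via the standard rte_flow API is a
 * destroy followed by a create, hence non-atomic. Pattern/action details
 * are left to the caller; this wrapper is hypothetical. */
#include <rte_flow.h>

struct rte_flow *
update_rule(uint16_t port_id, struct rte_flow *old_flow,
            const struct rte_flow_attr *attr,
            const struct rte_flow_item pattern[],
            const struct rte_flow_action actions[])
{
    struct rte_flow_error err;

    /* Step 1: remove the old rule from the NIC. */
    if (rte_flow_destroy(port_id, old_flow, &err) != 0)
        return NULL;

    /* Window: no rule installed; matching packets take the miss path. */

    /* Step 2: insert the replacement rule. */
    return rte_flow_create(port_id, attr, pattern, actions, &err);
}
```

The direct in-memory mechanism from the paper instead updates the existing rule entry in place, which removes this window and the per-update driver round trips.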

This is joint work with Georgios P. Katsikas, Tom Barbette, Marco Chiesa, Dejan Kostić, and Gerald Q. Maguire Jr.

Our ASPLOS ’21 Paper: “PacketMill: Toward Per-Core 100-Gbps Networking”

ASPLOS ’21 will feature Alireza’s presentation of our paper titled “PacketMill: Toward Per-Core 100-Gbps Networking”. This is joint work with Alireza Farshin, Tom Barbette, Amir Roozbeh, Gerald Q. Maguire Jr., and Dejan Kostić.

The full abstract (with the video and more resources below):

We present PacketMill, a system for optimizing software packet processing, which (i) introduces a new model to efficiently manage packet metadata and (ii) employs code-optimization techniques to better utilize commodity hardware. PacketMill grinds the whole packet processing stack, from the high-level network function configuration file to the low-level userspace network (specifically DPDK) drivers, to mitigate inefficiencies and produce a customized binary for a given network function. Our evaluation results show that PacketMill increases throughput (up to 36.4 Gbps – 70%) & reduces latency (up to 101 µs – 28%) and enables nontrivial packet processing (e.g., router) at ≈100 Gbps, when new packets arrive >10× faster than main memory access times, while using only one processing core.
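As a rough illustration of the metadata model in point (i), the hedged C sketch below contrasts a generic, one-size-fits-all packet descriptor that must be converted into the framework’s own packet object with a tailored struct the driver fills directly. The struct layouts and function names are invented for illustration and are not PacketMill’s actual API.

```c
/* Hedged sketch of two metadata-management models. Layouts and names are
 * hypothetical, loosely inspired by the idea of specializing the driver
 * so it fills the framework's own metadata instead of a generic one. */
#include <stdint.h>

/* Generic, one-size-fits-all metadata (what a stock driver fills). */
struct generic_meta {
    void    *data;
    uint16_t len;
    uint16_t port;
    uint64_t ol_flags;   /* offload flags a simple network function never reads */
    /* ... many more fields unused by a given network function ... */
};

/* Tailored metadata: only what this network function actually needs. */
struct nf_meta {
    void    *data;
    uint16_t len;
};

/* Baseline model: copy/convert per packet (extra per-packet memory traffic). */
static void rx_copying(const struct generic_meta *g, struct nf_meta *m) {
    m->data = g->data;
    m->len  = g->len;
}

/* Specialized model: the driver (customized at build time) writes the
 * tailored struct directly, so the conversion step disappears entirely. */
static void rx_direct(struct nf_meta *m, void *data, uint16_t len) {
    m->data = data;
    m->len  = len;
}
```

At 100 Gbps line rates, where a new packet can arrive faster than a main memory access completes, removing even one per-packet copy of unused metadata is significant.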

PacketMill presentation video: https://play.kth.se/media/PacketMillA+Toward+Per-Core+100-Gbps+Networking/0_7rvtusfo

PacketMill Webpage: https://packetmill.io/

PacketMill Paper: https://packetmill.io/docs/packetmill-asplos21.pdf
PacketMill source code: https://github.com/aliireza/packetmill
PacketMill Slides with English transcripts: https://people.kth.se/~farshin/documents/packetmill-asplos21-slides.pdf