
Paper: Reexamining Direct Cache Access to Optimize I/O Intensive Applications...

Conference paper in USENIX ATC'20

Published Jul 23, 2020

Abstract

Memory access is the major bottleneck in realizing multi-hundred-gigabit networks with commodity hardware; hence, it is essential to make good use of cache memory, a faster but smaller memory closer to the processor. Our goal is to study the impact of cache management on the performance of I/O intensive applications. Specifically, this paper looks at one of the bottlenecks in packet processing, i.e., direct cache access (DCA). We systematically studied the current implementation of DCA in Intel processors, particularly Data Direct I/O technology (DDIO), which directly transfers data between I/O devices and the processor's cache. Our empirical study enables system designers/developers to optimize DDIO-enabled systems for I/O intensive applications. We demonstrate that optimizing DDIO could reduce the latency of I/O intensive network functions running at 100 Gbps by up to ~30%. Moreover, we show that DDIO causes a 30% increase in tail latencies when processing packets at 200 Gbps; hence, it is crucial to selectively inject data into the cache or to explicitly bypass it.
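
The following is a minimal sketch (not taken from the paper's artifacts) of how one might inspect the DDIO configuration that the paper tunes: it reads the "IIO LLC WAYS" register (MSR 0xC8B on Intel Xeon Scalable processors, as discussed in the paper) through Linux's msr interface and reports how many last-level-cache ways DDIO may fill. The MSR address, its bit-vector interpretation, and the need for root privileges with the msr kernel module loaded are assumptions; the register is undocumented, so treat this purely as an illustration.

/* Sketch: report the number of LLC ways usable by DDIO on CPU 0.
 * Assumes: modprobe msr, run as root, Intel Xeon Scalable processor.
 * MSR 0xC8B ("IIO LLC WAYS") is an undocumented register; its meaning
 * here follows the paper's description, not an official Intel interface. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define IIO_LLC_WAYS_MSR 0xC8B

int main(void)
{
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/cpu/0/msr (msr module loaded? running as root?)");
        return EXIT_FAILURE;
    }

    uint64_t value;
    /* Reading /dev/cpu/N/msr at offset <MSR address> returns the 64-bit MSR value. */
    if (pread(fd, &value, sizeof(value), IIO_LLC_WAYS_MSR) != sizeof(value)) {
        perror("pread MSR 0xC8B");
        close(fd);
        return EXIT_FAILURE;
    }
    close(fd);

    /* Each set bit is assumed to mark one LLC way that DDIO can fill;
     * a common default is two ways (e.g., a bitmask of 0x600). */
    int ways = __builtin_popcountll(value);
    printf("IIO LLC WAYS = 0x%llx (%d way(s) usable by DDIO)\n",
           (unsigned long long)value, ways);
    return EXIT_SUCCESS;
}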

The paper and presentation video can be found on the USENIX ATC'20 webpage.

A preliminary version of this paper was presented in the EuroSys'20 poster session. The poster is available on DiVA.
 
