HotStorage 2024
Conference paper

Dictionary Based Cache Line Compression


Abstract

Active-standby mechanisms for VM high availability demand frequent synchronization of memory and CPU state, involving the identification and transfer of "dirty" memory pages to a standby target. Building upon the fine granularity offered by CXL-enabled memory devices, as discussed by Waddington et al., this paper proposes a dictionary-based compression method operating on 64-byte cache lines to minimize snapshot volume and synchronization latency. The method aims to transmit only the information needed to reconstruct the snapshot on the standby machine, augmented by byte-grouping and cache-line-partitioning techniques. We assess the compression benefits on memory access patterns across snapshots of 20 benchmarks and compare our approach to standard off-the-shelf compression methods. Our findings reveal significant improvements across nearly all benchmarks, with some showing more than a twofold improvement over standard compression and others more moderate gains. We conduct an in-depth experimental analysis of the contribution of each method and examine the nature of the benchmarks. We find that the repetition of cache lines across snapshots, together with their concise representation, predominantly drives the size reduction, accounting for 92% of the observed improvement. Our work paves the way for further reduction in the data transferred to standby machines, thereby enhancing VM high availability and substantially reducing synchronization latency.
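
As a rough illustration of the core idea only (this is a minimal sketch, not the method or implementation evaluated in the paper, and it omits the byte-grouping and cache-line-partitioning steps), the snippet below deduplicates 64-byte cache lines against a dictionary of previously seen lines, emitting a short index for repeated lines and the full 64 bytes otherwise. All function and token names are hypothetical.

```python
# Sketch of dictionary-based cache-line compression (illustrative only).
# Repeated 64-byte lines are replaced by a short dictionary index; unseen
# lines are sent verbatim and added to the dictionary on both sides.

LINE_SIZE = 64  # bytes per cache line

def compress_snapshot(snapshot: bytes):
    """Split a snapshot into 64-byte lines and deduplicate them.

    Returns a token stream: ("ref", index) for a line already in the
    dictionary, or ("lit", line_bytes) for a newly seen line.
    """
    dictionary = {}  # cache-line bytes -> dictionary index
    tokens = []
    for offset in range(0, len(snapshot), LINE_SIZE):
        line = snapshot[offset:offset + LINE_SIZE]
        idx = dictionary.get(line)
        if idx is not None:
            tokens.append(("ref", idx))      # repeated line: send index only
        else:
            dictionary[line] = len(dictionary)
            tokens.append(("lit", line))     # new line: send full 64 bytes
    return tokens

def decompress(tokens):
    """Rebuild the snapshot on the standby side from the token stream."""
    dictionary = []
    out = bytearray()
    for kind, value in tokens:
        if kind == "ref":
            out += dictionary[value]
        else:
            dictionary.append(value)
            out += value
    return bytes(out)
```

In this simplified form, the savings come entirely from lines that repeat within or across snapshots, which is consistent with the paper's observation that repeated cache lines and their concise representation account for most of the size reduction.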