🤖 AI Summary
Existing CXL operation-offloading mechanisms cannot exploit the trade-offs among offloading models built on different CXL protocols, which limits their ability to adapt to heterogeneous data and computational requirements—and thus constrains end-to-end performance and resource efficiency in disaggregated memory systems. To address this, we propose Asynchronous Back-Streaming, a novel protocol for CXL-based Computational Memory (CCM) that layers data and control transfers on top of the underlying CXL protocols, decoupling them to enable asynchronous, lightweight pipelined communication between the host and the CCM. We design KAI, a system that realizes this model. Experimental evaluation demonstrates that KAI reduces end-to-end execution time by up to 50.4%, while decreasing CCM and host idle times by 22.11× and 3.85× on average, respectively—significantly improving system utilization and latency.
📝 Abstract
CXL-based Computational Memory (CCM) enables near-memory processing within expanded remote memory, offering opportunities to reduce the data-movement costs of disaggregated memory systems and to accelerate overall performance. However, existing operation-offloading mechanisms cannot exploit the trade-offs of offloading models built on different CXL protocols. This work first examines these trade-offs and demonstrates their impact on end-to-end performance and system efficiency for workloads with diverse data and processing requirements. We then propose a novel Asynchronous Back-Streaming protocol that carefully layers data- and control-transfer operations on top of the underlying CXL protocols. We design KAI, a system that realizes the asynchronous back-streaming model, supporting asynchronous data movement and lightweight pipelining in host-CCM interactions. Overall, KAI reduces end-to-end runtime by up to 50.4%, and CCM and host idle times by 22.11× and 3.85× on average, respectively.
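To make the decoupling idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of the interaction pattern the abstract describes: the host issues lightweight control messages on one channel and receives results streamed back on a separate data channel, so submissions pipeline instead of blocking on each result. The queue names, task format, and "compute" are illustrative assumptions only.

```python
# Illustrative sketch (assumed, not KAI's actual protocol): decoupled
# control and data flows between a host and a compute-capable CCM,
# modeled here with two independent FIFO channels and a worker thread.
import queue
import threading

control_q = queue.Queue()  # host -> CCM: lightweight task descriptors
data_q = queue.Queue()     # CCM -> host: results streamed back asynchronously

def ccm_worker():
    """Stand-in for near-memory compute on the CCM side."""
    while True:
        task = control_q.get()
        if task is None:          # shutdown sentinel
            data_q.put(None)
            break
        # Placeholder "near-memory" computation: square the payload.
        data_q.put((task["id"], task["payload"] ** 2))

worker = threading.Thread(target=ccm_worker, daemon=True)
worker.start()

# Host pipelines submissions without waiting on each result.
for i in range(4):
    control_q.put({"id": i, "payload": i})
control_q.put(None)

# Host drains back-streamed results later, overlapping with other work.
results = []
while (item := data_q.get()) is not None:
    results.append(item)

print(results)  # [(0, 0), (1, 1), (2, 4), (3, 9)]
```

Because the control path never waits on the data path, host issue time and CCM compute time overlap—this overlap is the source of the idle-time reductions the abstract reports.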