REDServe: Overlapping Encoding and Prefill for Efficient LMM Inference

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large multimodal inference systems tightly couple visual encoders with language models, leading to severe resource interference, tight data dependencies, and poor support for both intra-request and inter-request parallelism. This work proposes REDServe, the first large-scale multimodal inference system to enable fine-grained decoupled scheduling. REDServe fully separates the encoding module from the language model and introduces three key techniques: (1) overlapping multimodal encoding with the LLM's forward computation, (2) a schedulable-token mechanism, and (3) a chunked prefill strategy, which together enable coordinated two-level pipelining across and within requests. Experiments demonstrate that REDServe reduces average latency by up to 66% and increases throughput by up to 109% over state-of-the-art systems, while significantly improving resource utilization and load balancing.

📝 Abstract
Large multimodal models (LMMs) typically employ an encoding module to transform multimodal data inputs into embeddings, which are then fed to language models for further processing. However, efficiently serving LMMs remains highly challenging due to the inherent complexity of their inference pipelines. Traditional serving engines co-locate the encoding module and the language model, leading to significant resource interference and tight data dependency. Recent studies have alleviated this issue by disaggregating the encoding module from the model, following the design style of prefill-decode disaggregation. Nevertheless, these approaches fail to fully exploit parallelism both within individual requests (intra-request) and across multiple requests (inter-request). To overcome these limitations, we propose REDServe, an LMM inference system that efficiently orchestrates intra- and inter-request pipelines. REDServe is designed to achieve low latency and maximize parallelism at both intra- and inter-request granularities. Built on the disaggregated architecture of the encoding module and language model, REDServe adopts a fine-grained scheduling method that overlaps multimodal encoding with the forward computation of the language model within a single request. For the inter-request pipeline, REDServe leverages schedulable tokens and token budgets to balance computational loads across micro-batches. Combined with chunked prefill, this enables a novel scheduling strategy that coordinates the execution of intra- and inter-request pipelines. Experimental evaluations on representative LMMs show that REDServe achieves a substantial latency reduction of up to 66% while improving throughput by up to 109%, significantly outperforming existing serving approaches.
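The intra-request overlap described in the abstract can be sketched in a few lines. This is a toy illustration, not REDServe's implementation: the names (`encode_images`, `prefill_with_overlap`), the thread-plus-queue structure, and the string stand-ins for embeddings are all assumptions made here to show why the language model need not wait for all image embeddings before starting on the text prefix.

```python
# Illustrative sketch (assumed structure, not REDServe's code): overlap the
# vision encoder with LLM prefill of the text prefix by running the encoder
# in a background thread and consuming embeddings as they become available.
import queue
import threading
import time


def encode_images(images, out_q):
    # Stand-in for a vision encoder; emits one embedding per image.
    for img in images:
        time.sleep(0.01)          # pretend per-image encoding cost
        out_q.put(f"emb({img})")
    out_q.put(None)               # sentinel: encoding finished


def prefill_with_overlap(text_tokens, images):
    emb_q = queue.Queue()
    t = threading.Thread(target=encode_images, args=(images, emb_q))
    t.start()
    consumed = []
    # Prefill text tokens immediately; they don't depend on image embeddings.
    for tok in text_tokens:
        consumed.append(tok)
    # Consume image embeddings as they arrive, without waiting for all of them.
    while (item := emb_q.get()) is not None:
        consumed.append(item)
    t.join()
    return consumed


result = prefill_with_overlap(["t0", "t1"], ["img0", "img1"])
```

In a real system the consumer would be the language model's prefill kernel and the queue entries would be embedding tensors, but the overlap pattern is the same: encoder and LLM make progress concurrently instead of serializing.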
Problem

Research questions and friction points this paper is trying to address.

Overcoming resource interference in multimodal model inference
Enhancing intra-request parallelism between encoding and language models
Optimizing inter-request scheduling across multiple inference requests
Innovation

Methods, ideas, or system contributions that make the work stand out.

Overlaps multimodal encoding with language model computation
Uses schedulable tokens to balance computational loads
Employs chunked prefill for coordinated intra-inter request scheduling
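The second and third points above can be sketched together: a per-micro-batch token budget packs prefill chunks from multiple requests so that each micro-batch carries a similar load. This is a minimal greedy sketch under assumptions made here (`Request`, `TOKEN_BUDGET`, `make_microbatches` are all invented names, and the budget value is arbitrary), not the paper's scheduler.

```python
# Illustrative sketch (assumptions, not the paper's algorithm): pack prefill
# chunks from several requests into micro-batches bounded by a token budget,
# splitting long prompts across batches (chunked prefill).
from dataclasses import dataclass

TOKEN_BUDGET = 512  # assumed per-micro-batch token budget


@dataclass
class Request:
    rid: str
    remaining: int  # prompt tokens still to prefill


def make_microbatches(requests):
    """Greedily fill each micro-batch up to TOKEN_BUDGET with prefill chunks."""
    batches = []
    pending = [Request(r.rid, r.remaining) for r in requests]
    while any(r.remaining > 0 for r in pending):
        budget, batch = TOKEN_BUDGET, []
        for r in pending:
            if budget == 0:
                break
            if r.remaining == 0:
                continue
            chunk = min(r.remaining, budget)  # split long prompts into chunks
            batch.append((r.rid, chunk))
            r.remaining -= chunk
            budget -= chunk
        batches.append(batch)
    return batches


# Example: a 900-token and a 300-token prompt share budget-bounded micro-batches.
batches = make_microbatches([Request("A", 900), Request("B", 300)])
```

The budget keeps every micro-batch's compute roughly constant, which is what lets prefill chunks interleave cleanly with other work in the inter-request pipeline.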
Tianyu Guo
CSE, Sun Yat-sen University
Tianming Xu
Rednote
Xianjie Chen
CSE, Sun Yat-sen University
Junru Chen
CSE, Sun Yat-sen University
Nong Xiao
CSE, Sun Yat-sen University
Xianwei Zhang
Sun Yat-sen U.; AMD Research/RTG
Architecture/System · Compilation · GPU/Memory · HPC · Simulation/Modeling