🤖 AI Summary
Real-time LLM inference faces several challenges: dynamic precision adaptation is difficult, hardware FP8 capabilities are underutilized, and co-deploying multiple model variants inflates storage overhead. Method: We propose NestedFP, a nested precision representation that losslessly decomposes each FP16 weight into two 8-bit components, enabling seamless runtime switching between FP8 and FP16 execution without additional memory footprint. Our approach integrates a customized CUTLASS GEMM kernel, vLLM integration, online FP16 reconstruction, and hierarchical precision scheduling. Results: Experiments show that FP8 mode achieves up to 1.55× higher throughput with negligible accuracy degradation, while FP16 mode incurs only 3.9% average performance overhead and significantly improves SLO compliance and resource utilization. To our knowledge, this is the first work enabling dual-precision adaptive inference without increasing model size, establishing a new paradigm for efficient LLM serving.
📝 Abstract
Large Language Models (LLMs) are playing a crucial role in latency-critical, high-throughput services like virtual assistants and code generation. While techniques such as continuous batching and paged attention address service-level objectives (SLOs), and quantization methods accelerate inference, the dynamic and efficient adaptation of precision at runtime remains a significant, largely underexplored challenge. The emergence of hardware support for FP8 arithmetic, offering up to 2x the throughput of FP16, presents an attractive opportunity for interactive LLM serving. However, current approaches like co-deploying FP8 and FP16 models suffer from increased storage overhead and fail to unlock FP8's full potential. To address these limitations, we introduce NestedFP, a novel precision-adaptive serving technique enabling seamless FP8 and FP16 inference from a single 16-bit model representation, thereby incurring no additional memory cost. NestedFP decomposes each FP16 weight into two 8-bit components, facilitating efficient FP8 execution while preserving full FP16 accuracy. We demonstrate the practical viability of our approach by implementing a custom CUTLASS-based GEMM kernel that reconstructs FP16 operands on-the-fly, integrated within the vLLM serving framework. Our evaluation shows that NestedFP delivers up to 1.55x throughput improvement in FP8 mode with negligible accuracy degradation compared to FP16 precision, while introducing only 3.9% performance overhead on average in FP16 mode across various models. NestedFP thus provides a flexible foundation for dynamic, SLO-aware precision selection, paving the way for more scalable and efficient LLM serving under bursty and heterogeneous workloads.
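The core mechanism, losslessly splitting each 16-bit weight into two 8-bit components and recombining them on the fly, can be illustrated with a minimal sketch. Note that the paper's actual FP8 operand format, scaling, and kernel-level layout are not specified in this abstract; the code below (function names `decompose_fp16` / `reconstruct_fp16` are hypothetical) only demonstrates that a two-plane 8-bit decomposition is bit-exact and adds no storage beyond the original FP16 tensor.

```python
import numpy as np

def decompose_fp16(w: np.ndarray):
    """Split FP16 weights into two 8-bit planes (high and low bytes).

    In NestedFP, one component would feed the fast FP8 path while the
    second holds the residual bits needed for full FP16 reconstruction;
    the exact split used by the paper may differ from this byte split.
    """
    bits = np.ascontiguousarray(w.astype(np.float16)).view(np.uint16)
    hi = (bits >> 8).astype(np.uint8)    # sign/exponent/top mantissa bits
    lo = (bits & 0xFF).astype(np.uint8)  # remaining mantissa bits
    return hi, lo

def reconstruct_fp16(hi: np.ndarray, lo: np.ndarray) -> np.ndarray:
    """Recombine the two 8-bit planes into the original FP16 values,
    mimicking the on-the-fly operand reconstruction in the GEMM kernel."""
    bits = (hi.astype(np.uint16) << 8) | lo.astype(np.uint16)
    return bits.view(np.float16)

w = np.random.randn(4, 4).astype(np.float16)
hi, lo = decompose_fp16(w)
w_rec = reconstruct_fp16(hi, lo)

# The round trip is bit-exact, and the two planes together occupy
# exactly the same number of bytes as the original FP16 tensor.
assert np.array_equal(w.view(np.uint16), w_rec.view(np.uint16))
assert hi.nbytes + lo.nbytes == w.nbytes
```

In a real kernel this reconstruction would happen in registers during the GEMM epilogue/mainloop rather than materializing an FP16 copy, which is why FP16 mode pays only a small overhead rather than doubling memory traffic.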