A Survey on Inference Engines for Large Language Models: Perspectives on Optimization and Efficiency

📅 2025-05-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of selecting and designing large language model (LLM) inference engines—namely, the lack of systematic evaluation guidance—this paper introduces the first multidimensional unified benchmarking framework. It systematically evaluates 25 mainstream open-source and commercial LLM inference engines across five dimensions—usability, deployability, generality, scalability, and throughput/latency adaptability—while analyzing the optimization techniques each engine supports, its ecosystem maturity, and the cost policies of commercial offerings. Key contributions include: (1) identifying future research directions, namely support for complex service orchestration, heterogeneous hardware acceleration, and security-enhanced inference; (2) releasing and maintaining an open, community-driven knowledge repository (Awesome-LLM-Inference-Engine); and (3) delivering a comprehensive assessment covering design trade-offs, application scenarios, and practical deployment guidelines, providing empirically grounded foundations for both industrial engine selection and academic R&D.

📝 Abstract
Large language models (LLMs) are widely applied in chatbots, code generators, and search engines. Workloads such as chain-of-thought, complex reasoning, and agent services significantly increase inference cost by invoking the model repeatedly. Optimization methods such as parallelism, compression, and caching have been adopted to reduce costs, but diverse service requirements make it hard to select the right method. Recently, specialized LLM inference engines have emerged as a key component for integrating these optimization methods into service-oriented infrastructures. However, a systematic study of inference engines is still lacking. This paper provides a comprehensive evaluation of 25 open-source and commercial inference engines. We examine each inference engine in terms of ease of use, ease of deployment, general-purpose support, scalability, and suitability for throughput- and latency-aware computation. Furthermore, we explore the design goals of each inference engine by investigating the optimization techniques it supports. In addition, we assess the ecosystem maturity of open-source inference engines and analyze the performance and cost policies of commercial solutions. We outline future research directions, including support for complex LLM-based services, support for diverse hardware, and enhanced security, offering practical guidance to researchers and developers in selecting and designing optimized LLM inference engines. We also provide a public repository to continually track developments in this fast-evolving field: https://github.com/sihyeong/Awesome-LLM-Inference-Engine
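The abstract distinguishes throughput-aware from latency-aware computation. A minimal sketch of how these two metrics are typically measured—time to first token (TTFT) for latency-sensitive serving versus tokens per second for throughput-oriented batching—is shown below. The `generate_tokens` function is a hypothetical stand-in for a real engine's decode loop, not any specific engine's API:

```python
import time

def generate_tokens(n_tokens, per_token_s=0.001):
    """Dummy stand-in for an LLM decode loop (hypothetical timings)."""
    for _ in range(n_tokens):
        time.sleep(per_token_s)  # simulate per-token decode work
        yield "tok"

def measure(n_tokens=50):
    """Measure TTFT (latency-aware) and tokens/s (throughput-aware)."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in generate_tokens(n_tokens):
        count += 1
        if ttft is None:
            ttft = time.perf_counter() - start  # time to first token
    total = time.perf_counter() - start
    return {
        "ttft_s": ttft,                    # what interactive chat cares about
        "throughput_tok_s": count / total, # what batch pipelines care about
    }

if __name__ == "__main__":
    print(measure())
```

An engine tuned for throughput (large batches, aggressive scheduling) often trades away TTFT, which is why the survey evaluates the two as separate dimensions.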
Problem

Research questions and friction points this paper is trying to address.

Optimizing diverse LLM inference costs for varying service demands
Evaluating 25 inference engines on performance, scalability, and deployment ease
Addressing gaps in systematic study and future LLM inference research directions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive evaluation of 25 inference engines
Integration of parallelism, compression, and caching
Public repository for tracking developments
Sihyeong Park
Korea Electronics Technology Institute, Seongnam-si, Gyeonggi-do, South Korea
Sungryeol Jeon
Korea Electronics Technology Institute, Seongnam-si, Gyeonggi-do, South Korea
Chaelyn Lee
Korea Electronics Technology Institute, Seongnam-si, Gyeonggi-do, South Korea
Seokhun Jeon
Korea Electronics Technology Institute, Seongnam-si, Gyeonggi-do, South Korea
Byung-Soo Kim
Korea Electronics Technology Institute, Seongnam-si, Gyeonggi-do, South Korea
Jemin Lee
Associate Professor, Yonsei University
Wireless Communications · Wireless Security · IoT · 5G