🤖 AI Summary
To address the challenge of jointly optimizing inference cost, latency, and reliability in large language model (LLM)-based knowledge systems for telecommunications, this paper proposes a three-tier collaborative architecture—edge, cloud, and expert—employing dynamic query routing guided by dual tests: knowledge consistency verification and LLM confidence scoring. The routing decision is formulated as a multiple hypothesis testing (MHT) problem, enabling statistically rigorous threshold selection that bounds misalignment risk under finite samples—a verifiable reliability guarantee for LLM cascades. Furthermore, confidence calibration is integrated with lightweight edge inference to improve resource efficiency. Evaluated on the TeleQnA benchmark, the approach achieves superior cost-effectiveness and expert alignment at a prescribed statistical confidence level, outperforming conventional cascade baselines.
📝 Abstract
Large language models (LLMs) are emerging as key enablers of automation in domains such as telecommunications, assisting with tasks including troubleshooting, standards interpretation, and network optimization. However, their deployment in practice must balance inference cost, latency, and reliability. In this work, we study an edge-cloud-expert cascaded LLM-based knowledge system that supports decision-making through a question-and-answer pipeline: an efficient edge model handles routine queries, a more capable cloud model addresses complex cases, and human experts are involved only when necessary. We formulate a misalignment-constrained optimization problem that minimizes average processing cost while guaranteeing alignment of automated answers with expert judgments. We propose a statistically rigorous threshold selection method, based on multiple hypothesis testing (MHT), for a query-processing mechanism that combines knowledge and confidence tests. The approach provides finite-sample guarantees on misalignment risk. Experiments on the TeleQnA dataset -- a telecom-specific benchmark -- demonstrate that the proposed method achieves superior cost-efficiency compared to conventional cascaded baselines, while ensuring reliability at prescribed confidence levels.
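The abstract's routing-and-threshold idea can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's actual procedure: the models are stubs returning (answer, confidence) pairs, only a confidence test is shown (the knowledge consistency test is omitted), and a Bonferroni-corrected binomial tail test over a small grid of candidate thresholds stands in for the paper's MHT formulation. The names `select_threshold`, `route`, and the candidate grid are all hypothetical.

```python
import math


def binom_tail(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p): probability of seeing at most k
    # misalignments among n accepted queries if the true risk were p.
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))


def select_threshold(conf, aligned, alpha=0.1, delta=0.05,
                     candidates=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """On calibration data (confidence scores + expert-alignment labels),
    return the smallest threshold certified, via Bonferroni-corrected
    binomial tests, to keep misalignment risk below alpha at level delta."""
    corrected = delta / len(candidates)  # Bonferroni correction over the grid
    for t in sorted(candidates):  # smallest first: cheapest certified choice
        acc = [(c, a) for c, a in zip(conf, aligned) if c >= t]
        if not acc:
            continue
        n = len(acc)
        k = sum(1 for _, a in acc if not a)  # misaligned among accepted
        # H0: true misalignment risk >= alpha; reject if observing <= k
        # errors would be this unlikely under H0.
        if binom_tail(k, n, alpha) <= corrected:
            return t
    return None  # nothing certifiable: escalate everything


def route(query, edge_model, cloud_model, t_edge, t_cloud):
    """Cascade a query through edge -> cloud -> expert tiers."""
    ans, conf = edge_model(query)
    if conf >= t_edge:
        return ans, "edge"
    ans, conf = cloud_model(query)
    if conf >= t_cloud:
        return ans, "cloud"
    return None, "expert"  # defer to a human expert
```

With separate calibration sets per tier, `select_threshold` would be run once for the edge model and once for the cloud model, and the resulting thresholds plugged into `route`; a finite-sample bound of this binomial-test flavor is one standard way to certify risk, though the paper's MHT construction may differ in its details.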