Reasoning Planning for Language Models

📅 2025-11-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches to selecting a reasoning path for language model inference rely on generating redundant candidates and neglect computational cost. Method: this paper proposes EPIC, a framework that uses contrastive learning to construct a unified representation space jointly modeling accuracy and inference overhead, enabling query-adaptive selection of the optimal reasoning path. Theoretically, it provides the first precision analysis of candidate aggregation methods and derives an error bound, which motivates a probabilistic-bound regularization mechanism that significantly reduces generation overhead while preserving accuracy. Methodologically, EPIC integrates ensemble-based planning, shared representation learning, and utility-driven optimization. Results: on multiple mathematical reasoning benchmarks, EPIC achieves higher and more stable accuracy at substantially lower computational cost, empirically validating both the optimizability of inference decisions and the effectiveness of co-designing for efficiency and accuracy.

📝 Abstract
Selecting an appropriate reasoning method for a given query remains a key challenge in language model generation. Existing approaches typically generate multiple candidate responses and use an aggregation strategy to select the output answer, often assuming that more candidate answers yield higher accuracy. We revisit this assumption through a rigorous theoretical analysis, deriving accuracy bounds for standard aggregation methods under fixed generation distributions and candidate sizes. Building on these insights, we introduce EPIC, an Ensemble Planning with Contrastive learning framework to learn a shared representation space that captures both model reasoning abilities and query-method compatibility. EPIC incorporates our probability bounds as a regularizer in a utility-driven optimization that balances accuracy and computational cost. Experiments on diverse mathematical reasoning tasks show that EPIC consistently selects optimal reasoning methods, improving accuracy while reducing computational overhead. Our code can be found at https://github.com/nguyenngocbaocmt02/EPIC.
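The abstract's point about aggregation accuracy bounds can be made concrete with a toy model. The sketch below is my own simplification, not the paper's derivation: it assumes each of n i.i.d. candidates is correct with probability p and that wrong answers never coincide, so majority voting succeeds exactly when correct candidates form a strict majority.

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """P(the correct answer wins a strict majority of n i.i.d. candidates),
    assuming each candidate is right with probability p and wrong answers
    never agree with each other (an optimistic toy model, not the paper's bound)."""
    # Sum the binomial tail: more than n/2 of the n candidates are correct.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 5, 11, 21):
    print(n, round(majority_vote_accuracy(0.6, n), 3))
```

Under this model, adding candidates helps only when p > 0.5, and the gains saturate quickly, consistent with the paper's observation that generating ever more candidates wastes compute for diminishing returns.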
Problem

Research questions and friction points this paper is trying to address.

Selecting optimal reasoning methods for language model queries
Improving accuracy while reducing computational costs
Learning query-method compatibility through contrastive representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

The EPIC framework uses contrastive learning to build a shared query-method representation space
Derived probability bounds act as a regularizer in a utility-driven optimization that balances accuracy against cost
Selects the optimal reasoning method per query, improving accuracy while reducing computation
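The utility-driven selection above can be sketched as scoring each reasoning method by predicted accuracy minus a cost penalty and taking the argmax. The method names, accuracy/cost numbers, and the weight `lam` below are hypothetical placeholders, not values from the paper.

```python
# Hypothetical sketch of utility-driven method selection: score each reasoning
# method by predicted accuracy minus a cost penalty, then pick the best.

def select_method(candidates: dict, lam: float) -> str:
    """candidates maps method name -> (predicted accuracy, relative compute cost).
    lam trades accuracy against cost; all numbers here are illustrative."""
    return max(candidates, key=lambda m: candidates[m][0] - lam * candidates[m][1])

candidates = {
    "direct-answer": (0.60, 1.0),      # one short generation
    "chain-of-thought": (0.78, 5.0),   # one reasoning trace
    "self-consistency": (0.85, 40.0),  # many traces + majority vote
}
print(select_method(candidates, lam=0.001))  # low cost weight: self-consistency
print(select_method(candidates, lam=0.01))   # higher cost weight: chain-of-thought
```

Raising `lam` shifts the optimum toward cheaper methods, which mirrors the accuracy/compute trade-off the bullets describe.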