Optimal Stopping vs Best-of-$N$ for Inference Time Optimization

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face a trade-off between generation quality and inference cost during repeated sampling. Method: This paper proposes the first inference-time adaptive stopping framework grounded in optimal stopping theory. Inspired by the Pandora's box problem, it models repeated response generation as opening costly stochastic reward boxes. It introduces a UCB-style online algorithm combined with a Bradley–Terry transformation that enables cross-prompt reward normalization and dynamic threshold learning, without requiring prior distributional assumptions. The theoretical analysis unifies Weitzman's optimal search strategy with UCB convergence guarantees. Results: Evaluated on AlpacaFarm and HH-RLHF with diverse LLM–reward model combinations, the method matches Best-of-N performance while reducing the average number of generations by 15%–35%, significantly improving inference efficiency.

📝 Abstract
Large language model (LLM) generation often requires balancing output quality against inference cost, especially when using multiple generations. We introduce a new framework for inference-time optimization based on the classical Pandora's Box problem. Viewing each generation as opening a costly "box" with random reward, we develop algorithms that decide when to stop generating without knowing the underlying reward distribution. Our first contribution is a UCB-style Pandora's Box algorithm, which achieves performance that is provably close to Weitzman's algorithm, the optimal strategy when the distribution is known. We further adapt this method to practical LLM settings by addressing reward scaling across prompts via a Bradley-Terry inspired transformation. This leads to an adaptive inference-time optimization method that normalizes rewards and learns stopping thresholds on the fly. Experiments on the AlpacaFarm and HH-RLHF datasets, using multiple LLM-reward model pairs, show that our adaptive strategy can obtain the same performance as non-adaptive Best-of-N sampling while requiring 15-35 percent fewer generations on average. Our results establish a principled bridge between optimal stopping theory and inference-time scaling, providing both theoretical performance bounds and practical efficiency gains for LLM deployment.
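The known-distribution baseline the abstract refers to, Weitzman's algorithm, can be illustrated compactly. The sketch below is a hypothetical minimal implementation (not the paper's code), assuming discrete reward distributions with support in [0, 1]: each box's reserve price σ solves c = E[(X − σ)⁺], boxes are opened in decreasing σ order, and search stops once the best realized reward exceeds every remaining reserve price.

```python
import random

def reserve_price(support, probs, cost, lo=0.0, hi=1.0, iters=60):
    """Solve cost = E[max(X - sigma, 0)] for sigma by bisection.
    Assumes rewards lie in [lo, hi]."""
    def excess(sigma):
        return sum(p * max(x - sigma, 0.0) for x, p in zip(support, probs))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if excess(mid) > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def weitzman(boxes, rng=random):
    """boxes: list of (support, probs, cost) tuples.
    Opens boxes in decreasing reserve-price order; stops when the best
    realized reward beats every remaining reserve price.
    Returns (best_reward, number_of_boxes_opened)."""
    order = sorted(boxes, key=lambda b: -reserve_price(b[0], b[1], b[2]))
    best, opened = float("-inf"), 0
    for support, probs, cost in order:
        if best >= reserve_price(support, probs, cost):
            break  # no remaining box is worth its opening cost
        best = max(best, rng.choices(support, probs)[0])
        opened += 1
    return best, opened
```

In the LLM setting each "box" is one more sampled generation, its cost the inference spend, and its reward the reward-model score; the paper's contribution is handling the case where the reward distribution is unknown.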
Problem

Research questions and friction points this paper is trying to address.

Balancing LLM output quality with inference cost during generation
Deciding when to stop generating without knowing reward distributions
Reducing required generations while maintaining performance levels
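To make the stopping problem concrete, here is a generic UCB-flavored stopping sketch; the paper's exact algorithm is not reproduced, and the Hoeffding-style bonus, the `delta` parameter, and the assumption that rewards are normalized to [0, 1] are all illustrative choices.

```python
import math

def ucb_stop(sample_reward, cost, max_samples=32, delta=0.05):
    """Draw rewards one at a time; stop when the best reward so far
    exceeds an optimistic (UCB) estimate of what one more costly draw
    could add. Assumes rewards are normalized to [0, 1]."""
    rewards = []
    for t in range(1, max_samples + 1):
        rewards.append(sample_reward())
        best = max(rewards)
        mean = sum(rewards) / t
        # Hoeffding-style confidence bonus; shrinks as samples accumulate
        bonus = math.sqrt(math.log(2 * t / delta) / (2 * t))
        # stop once the best draw beats the optimistic value of another
        # draw net of its cost
        if best >= min(1.0, mean + bonus) - cost:
            break
    return best, t
```

A draw with reward 1.0 stops immediately (nothing better is possible net of cost), while mediocre draws keep the optimistic threshold high and sampling continues up to the budget.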
Innovation

Methods, ideas, or system contributions that make the work stand out.

UCB-style Pandora's Box algorithm for optimal stopping
Bradley-Terry transformation for reward normalization
Adaptive threshold learning for reducing generation counts
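The Bradley–Terry idea above can be sketched as follows: map raw reward-model scores for one prompt onto a [0, 1] win-probability scale that is comparable across prompts. The reference point (the prompt's mean score) is a hypothetical choice for illustration; the paper's exact transform may differ.

```python
import math

def bt_normalize(rewards):
    """Bradley-Terry style normalization: convert raw scores for one
    prompt into the probability of beating the prompt's mean score,
    sigmoid(r - ref). Output lies in (0, 1) regardless of reward scale."""
    ref = sum(rewards) / len(rewards)
    return [1.0 / (1.0 + math.exp(ref - r)) for r in rewards]
```

Because the sigmoid is applied to score differences within a prompt, a single stopping threshold learned online can be shared across prompts whose raw reward scales differ.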