Towards Inference-time Scaling for Continuous Space Reasoning

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether inference-time scaling techniques, such as multi-path sampling and reward-model re-ranking, transfer effectively to continuous thought spaces. Methodologically, it builds on the COCONUT framework, generating diverse reasoning paths via dropout and re-ranking them with both process-based (PRM) and outcome-based (ORM) reward models. Geometric analysis and probing of trajectory dynamics further reveal that the fundamental challenge in continuous spaces, the indistinguishability of correct and incorrect reasoning, stems from insufficient discriminative structural priors in current representations. The contribution is threefold: (1) the first systematic validation of the feasibility of multi-path sampling in continuous reasoning spaces; (2) the finding that gains from discrete-space paradigms are fundamentally limited by weak representation discriminability; and (3) a proposed direction of explicitly injecting discriminative structural priors during training. Experiments confirm that while current methods improve Pass@N, the bottleneck lies at the representational level, not in the search strategy.
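The recipe summarized above (dropout-driven sampling of diverse continuous reasoning paths, followed by reward-model re-ranking) can be sketched as follows. This is a minimal toy illustration, not the paper's actual implementation: `reasoner`, `orm_score`, and the vector representation of a "continuous thought" are all hypothetical stand-ins.

```python
import random

random.seed(0)

def reasoner(prompt_vec, dropout_p=0.3):
    """Toy stand-in for a continuous-thought model: each call randomly masks
    coordinates, mimicking how dropout at inference time diversifies paths
    (illustrative only; COCONUT's real trajectories are hidden-state sequences)."""
    return [0.0 if random.random() < dropout_p else v for v in prompt_vec]

def orm_score(thought):
    """Hypothetical outcome reward model (ORM): here just the coordinate sum,
    standing in for a learned scorer of the final answer."""
    return sum(thought)

def sample_and_rerank(prompt_vec, n_samples=8):
    """Sample n diverse trajectories via dropout, then re-rank with the ORM."""
    candidates = [reasoner(prompt_vec) for _ in range(n_samples)]
    return max(candidates, key=orm_score)

best = sample_and_rerank([0.5, 1.0, 0.25, 0.75])
```

A PRM variant would score intermediate steps of each trajectory rather than only the outcome; the paper's finding is that in continuous space neither scorer discriminates well, because correct and incorrect trajectories are geometrically too similar.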

📝 Abstract
Inference-time scaling through multiple sample generation in combination with Process- or Outcome-Reward Model (PRM or ORM) re-ranking has proven effective for text-based reasoning in large language models. This paper investigates whether such established techniques can be successfully adapted to reasoning in the continuous space, using the COCONUT (Hao et al. 2024) continuous space reasoning LM as the backbone. We demonstrate the feasibility of generating diverse reasoning paths through dropout-based sampling. Our Pass@N analysis of the generated samples reveals the potential for a significant gain in performance, akin to the gains observed in the discrete space. However, we highlight unique challenges in materializing this gain in the continuous thought space. In particular, working recipes for data generation and for training PRM and ORM models in the discrete space unlock only marginal improvements in the continuous space. By probing various aspects, including geometric properties and trajectory dynamics, we identify the underlying reasons that prevent effective discrimination between correct and incorrect reasoning (essential for the functioning of PRM and ORM). Our findings reveal that current limitations stem from the absence of key inductive biases in continuous thought representations. We argue that training frameworks for continuous reasoning LMs must not only optimize for accuracy but also explicitly incorporate inductive biases that can be utilized at inference time to discriminate correct from incorrect thoughts. (Our code and data will be publicly available.)
Problem

Research questions and friction points this paper is trying to address.

Adapting inference-time scaling techniques to continuous space reasoning
Investigating challenges in discriminating correct vs incorrect continuous reasoning paths
Addressing limitations in inductive biases for continuous thought representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses dropout sampling for diverse reasoning paths
Applies PRM/ORM re-ranking in continuous space
Identifies need for inductive biases in training
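The Pass@N analysis mentioned above measures the headroom that re-ranking could unlock: the fraction of problems for which at least one of N sampled reasoning paths is correct. A minimal sketch of this metric (function name and input layout are assumptions, not the paper's code):

```python
def pass_at_n(results):
    """Pass@N over a benchmark.

    results: one list per problem, each containing N booleans
    indicating whether the corresponding sampled path was correct.
    Returns the fraction of problems solved by at least one sample.
    """
    solved = sum(1 for samples in results if any(samples))
    return solved / len(results)

# Example: 3 problems, 4 samples each; problems 1 and 3 have a correct sample.
score = pass_at_n([[False, True, False, False],
                   [False, False, False, False],
                   [True, True, False, False]])  # → 2/3
```

The gap between Pass@1 and Pass@N is what a perfect re-ranker would recover; the paper's point is that this gap exists in continuous space but current PRM/ORM scorers cannot close it.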
Minghan Wang (Department of Data Science & AI, Monash University)
Thuy-Trang Vu (Monash University)
Ehsan Shareghi (Monash University)
Gholamreza Haffari (Department of Data Science & AI, Monash University)