🤖 AI Summary
Current LLM reasoning environments face three critical bottlenecks: poor scalability due to reliance on expert annotation, weak generalization of skills learned in game-based settings, and a lack of verifiability. To address these, we propose the Structured In-context Environment (SIE) framework, which automatically constructs reasoning environments from large-scale structured data, whose explicit schemas and compositional reasoning chains provide a foundation for rule-based verification. SIE also supports exploratory inference in information-limited partial environments, where the model must infer missing information through multi-step exploration. Empirically, SIE achieves substantial gains on in-domain structured reasoning benchmarks. Moreover, the learned compositional reasoning skills transfer effectively to out-of-domain mathematical and logical reasoning tasks, demonstrating strong generalization and robustness.
📝 Abstract
Large language models (LLMs) have achieved significant advancements in reasoning capabilities through reinforcement learning (RL) via environmental exploration. Because the intrinsic properties of the environment determine the abilities that LLMs can learn, the environment plays an important role in the RL finetuning process. An ideal LLM reasoning environment should possess three core characteristics: scalability, generalizable reasoning, and verifiability. However, existing mathematical and coding environments are difficult to scale due to their heavy reliance on expert annotation, while the skills learned in game-based environments are too specialized to generalize. To bridge this gap, we introduce the **S**tructured **I**n-context **E**nvironment (SIE) framework. SIE achieves scalability by automatically constructing reasoning environments from large-scale structured data, where rich compositional patterns naturally support generalizable reasoning. Moreover, the explicit schemas and reasoning chains in structured data provide a foundation for rule-based verifiability. Experimental results show that the SIE framework not only achieves substantial improvements in in-domain structured reasoning, but also enables the learned compositional reasoning skills to generalize effectively to out-of-domain mathematical and logical reasoning tasks. We further explored learning in information-limited partial SIEs and found that LLMs can infer the missing information by exploring the environment, leading to robust improvements in both reasoning and generalization.