🤖 AI Summary
To address the limitations of reinforcement learning in long-context mathematical and programming reasoning tasks, namely sparse rewards and low sample efficiency, this paper proposes Semantic Soft Bootstrapping (SSB), a self-distillation training framework that requires no reinforcement learning, human annotation, or dense reward signals. SSB employs the same model as both teacher and student, leveraging semantic context to guide reliable chain-of-thought generation; it automatically constructs pedagogical pairs from correct solutions and representative errors. By integrating parameter-efficient fine-tuning with logits-sequence matching, SSB optimizes Qwen2.5-3B-Instruct and achieves +10.6% and +10% accuracy gains on MATH500 and AIME2024, respectively, over the RL-based baseline GRPO. The core contribution is a demonstration of purely supervised, high-fidelity self-distillation for long-horizon reasoning, achieving strong interpretability and training efficiency simultaneously.
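The "logits-sequence matching" objective described above can be sketched as a per-token KL divergence between the teacher's and student's output distributions, averaged over the sequence. This is a minimal stdlib-only illustration of that idea, not the authors' actual implementation; the function names, the temperature parameter, and the epsilon smoothing are assumptions for the sketch.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert one token's logit vector into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two probability vectors over the vocabulary."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def sequence_distillation_loss(teacher_logits_seq, student_logits_seq,
                               temperature=1.0):
    """Average per-token KL between teacher and student distributions.
    In SSB the teacher and student are the same base model: the teacher
    sees the semantic context about correctness, while the student sees
    only the bare question and is trained to match the teacher's logits."""
    losses = []
    for t_logits, s_logits in zip(teacher_logits_seq, student_logits_seq):
        p = softmax(t_logits, temperature)
        q = softmax(s_logits, temperature)
        losses.append(kl_divergence(p, q))
    return sum(losses) / len(losses)

# Toy check: a student that already matches the teacher incurs zero loss.
teacher = [[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]]
loss = sequence_distillation_loss(teacher, teacher)
# → 0.0
```

In a real training loop the distributions would be computed over the full vocabulary with tensor operations, and the loss would be backpropagated through the student only; the soft teacher distribution is what distinguishes this from ordinary hard-label fine-tuning.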
📝 Abstract
Long-context reasoning in large language models (LLMs) has been shown to enhance their cognitive capabilities via chain-of-thought (CoT) inference. Training such models is usually done via reinforcement learning with verifiable rewards (RLVR) on reasoning-based problems, such as math and programming. However, RLVR is limited by several bottlenecks, such as the lack of dense rewards and poor sample efficiency. As a result, it requires significant compute resources in the post-training phase. To overcome these limitations, in this work we propose **Semantic Soft Bootstrapping (SSB)**, a self-distillation technique in which the same base language model plays the roles of both teacher and student, but receives different semantic contexts about the correctness of its outputs at training time. The model is first prompted with a math problem, and several rollouts are generated. From these, a correct response and the most common incorrect response are filtered out and then provided to the model in context to produce a more robust, step-by-step explanation with a verified final answer. This pipeline automatically curates a paired teacher-student training set from raw problem-answer data, without any human intervention. The generation process also produces a sequence of logits, which the student model tries to match during training from the bare question alone. In our experiments, we fine-tuned Qwen2.5-3B-Instruct on the GSM8K dataset via parameter-efficient fine-tuning, then tested its accuracy on the MATH500 and AIME2024 benchmarks. Our experiments show improvements of 10.6% and 10% in accuracy, respectively, over group relative policy optimization (GRPO), a commonly used RLVR algorithm. Our code is available at https://github.com/purbeshmitra/semantic-soft-bootstrapping, and the model and curated dataset are available at https://huggingface.co/purbeshmitra/semantic-soft-bootstrapping.
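The automatic curation step described in the abstract, filtering a correct rollout and the most common incorrect final answer from several rollouts of the same problem, can be sketched with a simple majority count. This is an illustrative stdlib sketch, not the authors' released pipeline; the function name, the `(answer, trace)` rollout representation, and the skip-on-missing-side behavior are assumptions.

```python
from collections import Counter

def build_pedagogical_pair(rollouts, ground_truth):
    """From a list of (final_answer, reasoning_trace) rollouts, pick one
    correct trace and the most frequent incorrect final answer. Both are
    then placed in the teacher's context to elicit a robust step-by-step
    explanation with a verified final answer."""
    correct_traces = [trace for ans, trace in rollouts if ans == ground_truth]
    wrong_counts = Counter(ans for ans, _ in rollouts if ans != ground_truth)
    if not correct_traces or not wrong_counts:
        return None  # assumed: skip problems lacking either side of the pair
    most_common_wrong, _ = wrong_counts.most_common(1)[0]
    return correct_traces[0], most_common_wrong

# Toy usage: five rollouts of one problem whose ground-truth answer is "42".
rollouts = [("42", "trace A"), ("41", "trace B"), ("41", "trace C"),
            ("42", "trace D"), ("13", "trace E")]
pair = build_pedagogical_pair(rollouts, "42")
# → ("trace A", "41")
```

Because correctness is checked only against the known final answer, the whole pairing step runs without human annotation, which is what lets SSB curate its teacher-student training set directly from raw problem-answer data.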