🤖 AI Summary
This work addresses the need for accessible, high-performance large language models (LLMs) in China's K–12 mathematics education, proposing a lightweight LLM that can be deployed efficiently on a single consumer-grade GPU. To enhance mathematical reasoning capability and training stability, we introduce three key innovations: (1) Targeted Entropy Regularization, which improves policy exploration; (2) Recent Sample Recovery, which mitigates catastrophic forgetting; and (3) Policy-Specific Hardness Weighting, which optimizes sample selection. Further, we integrate curriculum-aligned post-training, group-relative advantage estimation, and dynamic data scheduling to significantly improve data efficiency and generalization. Our model achieves state-of-the-art performance across multiple mathematical reasoning benchmarks, outperforming models with several times its parameter count. The model and code are publicly released to advance educational equity and AI-driven educational accessibility.
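As a concrete illustration of the first innovation, the sketch below regularizes the policy's entropy toward a target value rather than simply adding an entropy bonus. This is a minimal sketch under stated assumptions: the function name, the quadratic penalty shape, and the coefficient are illustrative choices, not the paper's exact formulation.

```python
import torch

def targeted_entropy_loss(logits: torch.Tensor,
                          target_entropy: float,
                          coef: float = 1e-2) -> torch.Tensor:
    """Penalize deviation of the policy's token entropy from a target.

    Hypothetical sketch: the report states only that entropy is regularized
    toward a target to sustain exploration; the exact penalty form and where
    it enters the RL objective are assumptions here.
    """
    # Token-level entropy of the policy distribution, averaged over the batch.
    log_probs = torch.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    # Quadratic pull toward the target: unlike a plain entropy bonus, this
    # also discourages entropy from drifting far *above* the target, which
    # would otherwise destabilize training.
    return coef * (entropy - target_entropy) ** 2
```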
📝 Abstract
We introduce Confucius3-Math, an open-source large language model with 14B parameters that (1) runs efficiently on a single consumer-grade GPU and (2) achieves state-of-the-art (SOTA) performance on a range of mathematical reasoning tasks, outperforming many models of significantly larger size. In particular, as part of our mission to enhance education and knowledge dissemination with AI, Confucius3-Math is dedicated to mathematics learning for Chinese K-12 students and educators. Built via post-training with large-scale reinforcement learning (RL), Confucius3-Math aligns with the national curriculum and excels at solving mainstream Chinese K-12 mathematical problems at low cost. In this report we share our development recipe, the challenges we encounter, and the techniques we develop to overcome them. In particular, we introduce three technical innovations: Targeted Entropy Regularization, Recent Sample Recovery, and Policy-Specific Hardness Weighting. These innovations comprise a new entropy regularization scheme, a novel data scheduling policy, and an improved group-relative advantage estimator. Collectively, they significantly stabilize RL training, improve data efficiency, and boost performance. Our work demonstrates the feasibility of building strong domain-specific reasoning models at low cost. We open-source our model and code at https://github.com/netease-youdao/Confucius3-Math.
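To make the group-relative advantage estimator concrete, here is a minimal GRPO-style sketch with an optional per-prompt hardness weight. The function name, tensor shapes, standardization form, and the way the weight enters are assumptions for illustration; the paper's Policy-Specific Hardness Weighting presumably derives such a weight from the current policy's success rate on each problem, but that derivation is not reproduced here.

```python
from typing import Optional
import torch

def group_relative_advantages(rewards: torch.Tensor,
                              hardness: Optional[torch.Tensor] = None,
                              eps: float = 1e-6) -> torch.Tensor:
    """Standardize rewards within each prompt's group of sampled responses.

    rewards:  (num_prompts, group_size) scalar reward per sampled response.
    hardness: optional (num_prompts,) per-prompt weight; hypothetical here,
              standing in for the paper's Policy-Specific Hardness Weighting.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    # Responses to the same prompt compete against each other, so the group
    # mean serves as the baseline instead of a learned value function.
    adv = (rewards - mean) / (std + eps)
    if hardness is not None:
        # Up-weight prompts the current policy finds hard (assumed usage).
        adv = adv * hardness.unsqueeze(-1)
    return adv
```

For example, `group_relative_advantages(torch.tensor([[1., 0., 0., 1.]]))` yields positive advantages for the two correct responses and negative ones for the rest, without requiring a critic network.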