🤖 AI Summary
Current large reasoning models prioritize answer accuracy and token efficiency while neglecting core dimensions of trustworthiness: interpretability, faithfulness, and reliability. This work introduces ReFIne, a training framework that jointly addresses all three. ReFIne makes reasoning easier to follow through structured, tag-based traces with high-level planning; improves decision faithfulness by explicitly disclosing the decisive information behind each solution, with consistent cross-section references; and promotes reliability through self-assessments of both the derivation's soundness and the confidence of the final answer. The method combines supervised fine-tuning with GRPO-based reinforcement learning. Applied to Qwen3 models at multiple scales and evaluated on mathematical benchmarks of varying difficulty, ReFIne improves reasoning-trace clarity (interpretability +44.0%), decision faithfulness (+18.8%), and the informativeness of confidence estimates (reliability +42.4%). These results argue for a shift from accuracy-centric to trustworthiness-driven reasoning models.
📝 Abstract
Recent advances in long chain-of-thought (CoT) reasoning have largely prioritized answer accuracy and token efficiency, while overlooking aspects critical to trustworthiness. We argue that usable reasoning systems must be trustworthy, characterized by three properties: interpretability, faithfulness, and reliability. To this end, we propose ReFIne, a new training framework that integrates supervised fine-tuning with GRPO (Group Relative Policy Optimization) to encourage models to: (i) improve interpretability by producing structured, tag-based traces with high-level planning that are easier for humans to follow; (ii) enhance faithfulness by explicitly disclosing the decisive information guiding each solution, with consistent cross-section references; and (iii) promote reliability by providing self-assessments of both the derivation's soundness and the confidence of the final answer. We apply ReFIne to Qwen3 models at multiple scales (1.7B/4B/8B) and evaluate across mathematical benchmarks of varying difficulty. Our experimental results show that ReFIne models generate clearer and better-structured reasoning traces (interpretability +44.0%), more faithfully expose their underlying decision process (faithfulness +18.8%), and offer informative confidence estimates (reliability +42.4%). These findings highlight an overlooked but important direction: reasoning models should be optimized not only for accuracy, but also for broader dimensions of trustworthiness. Our code is available at: https://github.com/Trustworthy-ML-Lab/Training_Trustworthy_LRM_with_Refine