🤖 AI Summary
Large language models (LLMs) frequently produce erroneous intermediate reasoning steps, hallucinations, and formatting inconsistencies in mathematical reasoning, undermining the reliability of chain-of-thought generation and hindering the production of well-formed symbolic function expressions. To address this, the authors propose EDCIM, a framework that combines symbolic error detection with multi-tier LLM collaboration for end-to-end error identification and correction in interpretable mathematics tasks. Its key contributions are: (1) generation of the explicit functional form that solves a problem stated in natural language, rather than an opaque final answer; (2) a dynamic trade-off between accuracy and computational cost governed by a single tunable hyperparameter; and (3) orchestration of lightweight open-source models with more powerful proprietary LLMs across the detection-correction pipeline. Experiments show that EDCIM achieves comparable or improved accuracy while substantially reducing both computational overhead and inference cost.
📝 Abstract
Recent large language models (LLMs) have demonstrated the ability to perform explicit multi-step reasoning, such as chain-of-thought prompting. However, their intermediate steps often contain errors that can propagate, leading to inaccurate final predictions. Additionally, LLMs still struggle with hallucinations and often fail to adhere to prescribed output formats, which is particularly problematic for tasks like generating mathematical expressions or source code. This work introduces EDCIM (Error Detection and Correction for Interpretable Mathematics), a method for detecting and correcting these errors in interpretable mathematics tasks, where the model must generate the exact functional form that explicitly solves a problem expressed in natural language, rather than a black-box solution. EDCIM uses LLMs to generate a system of equations for a given problem, followed by a symbolic error-detection framework that identifies errors and provides targeted feedback for LLM-based correction. To optimize efficiency, EDCIM integrates lightweight, open-source LLMs with more powerful proprietary models, balancing cost and accuracy. This balance is governed by a single hyperparameter, letting users tune the trade-off to their cost and accuracy requirements. Experimental results across different datasets show that EDCIM significantly reduces both computational and financial costs while maintaining, and even improving, prediction accuracy when the balance is properly configured.
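The detect-then-correct loop described above can be sketched in miniature. This is not the paper's implementation: the model stand-ins, the residual-based checker, and the tolerance `tau` (standing in for the single cost/accuracy hyperparameter) are all hypothetical simplifications. A cheap model answers first; a symbolic check substitutes its solution back into the generated equations; only on failure is targeted feedback escalated to a stronger model.

```python
# Minimal sketch of a detect-then-correct pipeline (all names hypothetical).
# Equations are (lhs, rhs) pairs of callables over a variable assignment.

def residual(equations, solution):
    """Largest absolute mismatch when `solution` is substituted back
    into each equation; zero means the system is satisfied."""
    return max(abs(lhs(solution) - rhs(solution)) for lhs, rhs in equations)

def detect_and_correct(problem, cheap_model, strong_model, tau=1e-9):
    """Try the lightweight model first; escalate with feedback if the
    symbolic check fails. `tau` trades accuracy against cost."""
    eqs, sol = cheap_model(problem)              # inexpensive first pass
    err = residual(eqs, sol)
    if err <= tau:                               # detector accepts: done
        return sol, "cheap"
    feedback = f"residual {err:.3g} exceeds tolerance {tau}"
    eqs, sol = strong_model(problem, feedback)   # targeted correction
    return sol, "strong"

# Toy system: x + y = 10 and x - y = 4, whose solution is x = 7, y = 3.
eqs = [(lambda s: s["x"] + s["y"], lambda s: 10),
       (lambda s: s["x"] - s["y"], lambda s: 4)]

def cheap(problem):                 # deliberately wrong first guess
    return eqs, {"x": 6, "y": 3}

def strong(problem, feedback):      # corrected answer after feedback
    return eqs, {"x": 7, "y": 3}

sol, tier = detect_and_correct("toy", cheap, strong)
print(sol, tier)   # -> {'x': 7, 'y': 3} strong
```

Raising `tau` lets more cheap-model answers through unverified (lower cost, lower accuracy); lowering it escalates more often, mirroring the trade-off the single hyperparameter controls.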