DEVAL: A Framework for Evaluating and Improving the Derivation Capability of Large Language Models

📅 2025-11-18
🤖 AI Summary
This work investigates large language models’ (LLMs) derivation capability (DC)—their ability to reason about how an output should change in response to systematic modifications of the input—a previously uncharacterized aspect of input-output dynamic reasoning. Method: We propose DEVAL, the first systematic evaluation framework that formally defines derivation relations and DC, accompanied by a multi-task benchmark. We further introduce Derivation Prompting (DP), a novel prompting technique designed to explicitly elicit and enhance DC. Contribution/Results: Empirical evaluation reveals that state-of-the-art LLMs recognize derivation relations moderately well but are consistently weak at applying them in problem-solving across diverse tasks. DP achieves an average 15.2% improvement in DC across multiple models, outperforming existing prompting methods. This work establishes the first rigorous foundation for assessing and improving DC, thereby addressing a critical gap in the evaluation and enhancement of dynamic input-output reasoning capabilities in LLMs.

📝 Abstract
Assessing the reasoning ability of Large Language Models (LLMs) over data remains an open and pressing research question. Unlike LLMs, human reasoners can derive the corresponding modification to an output from a given kind of change to the input. This reasoning pattern, which relies on abstract rules governing relationships between changes in data, has not been comprehensively characterized or evaluated in LLMs. In this paper, we formally define this reasoning pattern as the Derivation Relation (DR) and introduce the concept of Derivation Capability (DC), i.e., applying a DR by making the corresponding modification to the output whenever the input undergoes certain changes. To assess DC, we propose a systematically constructed evaluation framework named DEVAL and use it to evaluate five popular LLMs and one Large Reasoning Model on seven mainstream tasks. The evaluation results show that mainstream LLMs, such as GPT-4o and Claude 3.5, exhibit moderate DR recognition capability but suffer significant drop-offs when applying DRs in problem-solving scenarios. To address this, we propose a novel prompt engineering approach called Derivation Prompting (DP). It achieves an average improvement of 15.2% in DC across all tested LLMs, outperforming commonly used prompt engineering techniques.
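The Derivation Relation can be made concrete with a toy example. The task, rule, and function names below are illustrative assumptions, not the paper's benchmark or API: if a known input change (doubling every number in a list) implies a known output change (the sum also doubles), a model's answer to the modified input can be scored against the derived output.

```python
# Toy illustration of a Derivation Relation (DR) check.
# Task: sum a list of numbers. DR: doubling every input element
# should double the output. All names here are illustrative.

def solve(xs):
    """Ground-truth solver for the base task."""
    return sum(xs)

def modify_input(xs):
    """A systematic input change: double every element."""
    return [2 * x for x in xs]

def derive_output(y):
    """The output change the DR predicts for that input change."""
    return 2 * y

def dc_score(model_answer, base_input):
    """1 if the model's answer to the modified input matches the
    output derived from the base answer, else 0."""
    derived = derive_output(solve(base_input))
    return int(model_answer == derived)

base = [1, 2, 3]
print(solve(base))                 # 6
print(solve(modify_input(base)))   # 12
print(dc_score(12, base))          # 1: matches the derived output
```

In a real evaluation, `model_answer` would come from prompting the LLM with the modified input, and averaging such scores over many tasks and derivation rules yields an aggregate DC measure of the kind the paper reports.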
Problem

Research questions and friction points this paper is trying to address.

Evaluating derivation capability of LLMs using abstract rules
Assessing reasoning over data changes in large language models
Improving derivation performance through novel prompting techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework evaluates derivation capability in LLMs
Derivation Prompting improves reasoning via abstract rules
Derivation Prompting improves DC by 15.2% on average
Yifan Li
East China Normal University, Shanghai, China
Qin Li
East China Normal University, Shanghai, China
Min Zhang
East China Normal University, Shanghai, China
Peixin Wang
East China Normal University
Formal Methods · Trustworthy AI · Program Verification