🤖 AI Summary
This work investigates the derivation capability (DC) of large language models (LLMs): their ability to reason about how an output should change in response to systematic modifications of the input, a previously uncharacterized aspect of input-output reasoning.
Method: We propose DEVAL, the first systematic evaluation framework that formally defines derivation relations (DRs) and DC, accompanied by a multi-task benchmark. We further introduce Derivation Prompting (DP), a novel prompting technique designed to explicitly elicit and enhance DC.
Contribution/Results: Empirical evaluation shows that state-of-the-art LLMs recognize derivation relations moderately well but exhibit significant drop-offs when applying them in problem-solving scenarios. DP achieves an average 15.2% improvement in DC across the tested models, outperforming commonly used prompting techniques. This work establishes a rigorous foundation for assessing and improving DC, addressing a gap in the evaluation and enhancement of dynamic input-output reasoning in LLMs.
📝 Abstract
Assessing the reasoning ability of Large Language Models (LLMs) over data remains an open and pressing research question. Human reasoning can derive the corresponding modification to an output from certain kinds of changes to the input. This reasoning pattern, which relies on abstract rules governing relationships between changes in data, has not been comprehensively described or evaluated in LLMs. In this paper, we formally define this reasoning pattern as the Derivation Relation (DR) and introduce the concept of Derivation Capability (DC), i.e., the ability to apply a DR by making the corresponding modification to the output whenever the input undergoes a certain change. To assess DC, we propose a systematically constructed evaluation framework named DEVAL and use it to evaluate five popular LLMs and one Large Reasoning Model on seven mainstream tasks. The results show that mainstream LLMs, such as GPT-4o and Claude 3.5, exhibit moderate DR recognition capability but significant drop-offs in applying DRs effectively in problem-solving scenarios. To address this, we propose a novel prompt engineering approach called Derivation Prompting (DP). It achieves an average improvement of 15.2% in DC across all tested LLMs, outperforming commonly used prompt engineering techniques.
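The abstract does not spell out the DP template itself, so as an illustration only, here is a minimal sketch of what a two-step derivation-style prompt could look like: first elicit the rule relating the input change to the output change, then ask the model to apply it. The function name, template wording, and the toy sorting example are all assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a derivation-style prompt (NOT the paper's actual
# DP template). Step 1 elicits the derivation relation; Step 2 applies it.
def derivation_prompt(task, original_input, original_output, modified_input):
    return (
        f"Task: {task}\n"
        f"Original input: {original_input}\n"
        f"Original output: {original_output}\n"
        f"The input has been modified to: {modified_input}\n"
        "Step 1: State the rule relating this change in the input "
        "to the required change in the output.\n"
        "Step 2: Apply that rule to produce the new output."
    )

# Toy example: appending 0 to a sorted-list task should prepend 0 to the output.
prompt = derivation_prompt(
    task="Sort the list in ascending order",
    original_input=[3, 1, 2],
    original_output=[1, 2, 3],
    modified_input=[3, 1, 2, 0],
)
print(prompt)
```

The two explicit steps mirror the recognition/application split the evaluation reports: models that can state the rule do not necessarily apply it unless prompted to do so.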