🤖 AI Summary
This work proposes a novel approach to evaluating the metareasoning capabilities of large language models on complex tasks—specifically, their ability to plan and monitor intermediate reasoning steps rather than being judged solely on final-answer accuracy. To this end, the authors design a multi-hop table-based question answering task that requires models to decompose questions, invoke tools to retrieve geopolitical indicators, and perform numerical computations. The study explicitly distinguishes metareasoning from object-level reasoning and introduces “essential actions” as a fine-grained metric for process evaluation. Experimental results indicate that current models exhibit some metareasoning capacity but still struggle with task comprehension and numerical calculation, and that n-shot prompting yields limited performance gains.
📝 Abstract
Recent advancements in Large Language Models (LLMs) are increasingly focused on "reasoning" ability, a concept with many overlapping definitions in the LLM discourse. We take a more structured approach, distinguishing meta-level reasoning (the process of reasoning about the intermediate steps required to solve a task) from object-level reasoning (the low-level execution of those steps). We design a novel question answering task based on the values of geopolitical indicators for various countries over various years. Questions require decomposition into intermediate steps, retrieval of data, and mathematical operations over that data. We analyse the meta-level reasoning ability of LLMs by examining their selection of appropriate tools for answering questions. To bring greater depth to the analysis of LLMs beyond final-answer accuracy, our task specifies 'essential actions' against which we can compare the tool-call output of LLMs to infer the strength of their reasoning ability. We find that LLMs demonstrate good meta-level reasoning on our task, yet are flawed in some aspects of task understanding. We also find that n-shot prompting has little effect on accuracy and that error messages rarely degrade performance, and we provide additional evidence for the poor numeracy of LLMs. Finally, we discuss the generalisation and limitations of our findings with respect to other task domains.
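To make the 'essential actions' idea concrete, a process-level score could compare the set of tool calls a model actually made against the set any correct solution must make. The sketch below is purely illustrative; the function name, action representation, and tool names are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: scoring an LLM's tool calls against a question's
# "essential actions" (the tool invocations any correct solution must make).
# All names here are illustrative assumptions, not the paper's actual API.

def score_tool_calls(model_calls, essential_actions):
    """Compare a model's tool calls with the essential actions for a question.

    Each action is a (tool_name, frozenset_of_args) pair, so argument
    order does not matter when matching calls.
    """
    made = set(model_calls)
    required = set(essential_actions)
    hit = made & required
    recall = len(hit) / len(required) if required else 1.0    # coverage of essential actions
    precision = len(hit) / len(made) if made else 0.0         # fraction of calls that were needed
    return {"precision": precision, "recall": recall}


# Example: a question such as "What was the difference in GDP between
# France and Germany in 2010?" might require two retrievals and a subtraction.
essential = [
    ("get_indicator", frozenset({"country=France", "indicator=GDP", "year=2010"})),
    ("get_indicator", frozenset({"country=Germany", "indicator=GDP", "year=2010"})),
    ("subtract", frozenset({"a", "b"})),
]
model = essential[:2] + [("add", frozenset({"a", "b"}))]  # model chose the wrong operation
print(score_tool_calls(model, essential))  # both retrievals hit; the arithmetic step missed
```

A set-based comparison like this captures meta-level behaviour (did the model plan the right retrievals and operations?) separately from whether the final number happened to be correct.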