AI Summary
Existing methods struggle to evaluate the quality of diverse or progressively degraded reasoning trajectories generated by language models from a human cognitive perspective, and they generalize poorly. To address this challenge, this work proposes MarODE, a framework that integrates Markov processes with ordinary differential equations (ODEs) to model the dynamic evolution of reasoning trajectories. MarODE establishes a theoretically grounded, human-aligned evaluation paradigm that captures the temporal and structural nuances of human judgment. Through human-centric perturbation testing and large-scale empirical validation, MarODE surpasses current baselines by over 250% in Somers' D correlation, significantly improving both the accuracy and generalizability of reasoning quality assessment.
Abstract
Reasoning traces produced by generative language models are increasingly used for tasks ranging from mathematical problem solving to automated fact checking. However, existing evaluation methods remain largely mechanical and fail to capture human-centric notions of reasoning quality in a way that generalizes across varied and progressively degraded reasoning. We introduce MarODE, an offline evaluation framework that assigns quality scores to reasoning traces. Its effectiveness is assessed using human-centric perturbations and human judgments, which jointly evaluate two fundamental dimensions of an evaluation metric: goodness and soundness. The approach is grounded in a Markovian formulation of reasoning progression and an ordinary differential equation (ODE) based characterization of trace dynamics, enabling efficient evaluation of reasoning quality. In a large-scale evaluation, MarODE outperforms existing baselines by over 250% under Somers' D correlation. Our results underscore the value of theory-driven evaluation frameworks as reasoning traces become central to language model-based systems.
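The abstract does not specify MarODE's internals, but the two named ingredients can be illustrated with a minimal, hypothetical sketch: step-to-step quality changes modeled as a Markov chain, and a trace's overall dynamics summarized by fitting the rate `r` of a simple linear ODE, dq/dt = r·q, whose solution is q(t) = q₀·exp(r·t). All state names, probabilities, and quality values below are assumptions for illustration, not the paper's actual model.

```python
import math

# Assumed transition probabilities between coarse step-quality states.
TRANSITIONS = {
    ("good", "good"): 0.8, ("good", "bad"): 0.2,
    ("bad", "good"): 0.3, ("bad", "bad"): 0.7,
}

def markov_log_likelihood(states):
    """Log-likelihood of a sequence of step states under the chain:
    higher values mean the trace follows more typical transitions."""
    return sum(math.log(TRANSITIONS[(a, b)])
               for a, b in zip(states, states[1:]))

def decay_rate(qualities):
    """Least-squares slope of log-quality over step index, i.e. the
    fitted 'r' in dq/dt = r*q. Negative r indicates a degrading trace."""
    n = len(qualities)
    xs, ys = range(n), [math.log(q) for q in qualities]
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A trace that starts well and degrades: likelihood and rate both drop.
score_ll = markov_log_likelihood(["good", "good", "bad", "bad"])
rate = decay_rate([0.9, 0.8, 0.5, 0.3])
```

A progressively degraded trace yields a negative fitted rate, which is the kind of signal the perturbation-based validation described in the abstract would be expected to detect.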