LLM-as-a-Judge for Time Series Explanations

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the absence of a general framework for evaluating the faithfulness of natural language explanations generated by large language models (LLMs) for time series data without relying on reference texts. It proposes the first reference-free evaluation method, leveraging LLMs to perform ternary correctness judgments based on pattern recognition, numerical accuracy, and answer faithfulness. By integrating a synthetic benchmark with a data-driven mechanism, the approach enables stable and reliable scoring and ranking of generated explanations without requiring ground-truth labels. Experimental results demonstrate that, despite substantial variability in LLMs’ generation performance (accuracy ranging from 0.00 to 0.96), their capacity for faithful evaluation remains highly robust, thereby validating the effectiveness and innovative potential of using LLMs as evaluators of time series explanations.
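To make the ternary-judgment idea concrete, here is a minimal, hypothetical sketch of a reference-free LLM judge. The paper does not publish its prompts or scoring scheme, so the function names (`judge_explanation`, `rank_explanations`), the label strings, and the label-to-score mapping below are illustrative assumptions; `llm` stands in for any callable that wraps a real model API.

```python
# Hypothetical sketch of reference-free ternary judging; label names and
# the score mapping are assumptions, not the paper's actual protocol.
LABEL_SCORES = {"correct": 1.0, "partially_correct": 0.5, "incorrect": 0.0}

def judge_explanation(series, question, explanation, llm):
    """Ask an LLM judge for a ternary correctness label.

    `llm` is a callable(prompt) -> label string; a real implementation
    would wrap an API client. The prompt names the three criteria from
    the paper: pattern identification, numeric accuracy, faithfulness.
    """
    prompt = (
        "Time series: " + ", ".join(f"{x:.2f}" for x in series) + "\n"
        f"Question: {question}\n"
        f"Candidate explanation: {explanation}\n"
        "Judge the explanation's pattern identification, numeric accuracy, "
        "and answer faithfulness. Reply with exactly one label: "
        "correct, partially_correct, or incorrect."
    )
    label = llm(prompt).strip().lower()
    # Fall back to the harshest label on malformed judge output.
    return label if label in LABEL_SCORES else "incorrect"

def rank_explanations(series, question, candidates, llm):
    """Score each candidate with the judge and sort best-first,
    supporting both the independent-scoring and relative-ranking tasks."""
    scored = [(LABEL_SCORES[judge_explanation(series, question, c, llm)], c)
              for c in candidates]
    return sorted(scored, key=lambda t: t[0], reverse=True)
```

Because the judge returns a discrete label rather than free text, scores are directly comparable across candidates, which is what enables the stable ranking behavior the summary describes.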
📝 Abstract
Evaluating the factual correctness of LLM-generated natural language explanations grounded in time series data remains an open challenge. Although modern models generate textual interpretations of numerical signals, existing evaluation methods are limited: reference-based similarity metrics and consistency-checking models require ground-truth explanations, while traditional time series methods operate purely on numerical values and cannot assess free-form textual reasoning. Thus, no general-purpose method exists to directly verify whether an explanation is faithful to the underlying time series data without predefined references or task-specific rules. We study large language models as both generators and evaluators of time series explanations in a reference-free setting, where, given a time series, a question, and a candidate explanation, the evaluator assigns a ternary correctness label based on pattern identification, numeric accuracy, and answer faithfulness, enabling principled scoring and comparison. To support this, we construct a synthetic benchmark of 350 time series cases across seven query types, each paired with correct, partially correct, and incorrect explanations. We evaluate models across four tasks: explanation generation, relative ranking, independent scoring, and multi-anomaly detection. Results show a clear asymmetry: generation is highly pattern-dependent and exhibits systematic failures on certain query types, with accuracies ranging from 0.00 to 0.12 for Seasonal Drop and Volatility Shift, and from 0.94 to 0.96 for Structural Break, while evaluation is more stable, with models correctly ranking and scoring explanations even when their own outputs are incorrect. These findings demonstrate the feasibility of data-grounded, LLM-based evaluation for time series explanations and highlight the potential of LLMs as reliable evaluators of data-grounded reasoning in the time series domain.
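The abstract's benchmark pairs synthetic series from seven query types with candidate explanations. The paper's exact construction is not given here, so the following is an illustrative sketch of two of the named pattern types, Structural Break and Volatility Shift; the lengths, break points, shift magnitudes, and noise scales are assumptions chosen for clarity.

```python
# Illustrative generators for two of the paper's query types; all
# parameter values are assumptions, not the benchmark's actual settings.
import random

def structural_break(n=100, break_at=50, shift=5.0, noise=0.3, seed=0):
    """Flat series whose level jumps by `shift` at index `break_at`,
    with Gaussian noise of std dev `noise` throughout."""
    rng = random.Random(seed)
    return [(shift if t >= break_at else 0.0) + rng.gauss(0.0, noise)
            for t in range(n)]

def volatility_shift(n=100, shift_at=50, low=0.2, high=2.0, seed=0):
    """Zero-mean noise whose std dev jumps from `low` to `high`
    at index `shift_at`."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, high if t >= shift_at else low)
            for t in range(n)]
```

A correct explanation for a `structural_break` series would name the jump location and approximate magnitude; a partially correct one might identify the break but misstate its size, giving the judge graded cases to label.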
Problem

Research questions and friction points this paper is trying to address.

LLM-as-a-Judge
time series explanations
factual correctness
reference-free evaluation
faithfulness
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-as-a-Judge
time series explanations
reference-free evaluation
factual correctness
synthetic benchmark