🤖 AI Summary
This study addresses the limited accuracy of cross-domain faithfulness (factual-consistency) evaluation for large language models (LLMs). Methodologically, it proposes a human-feedback-driven metric fusion framework whose novelty lies in using a tree-based model to dynamically weight multiple elementary faithfulness metrics so that the fused score approximates human judgements. A unified, human-annotated dataset spanning both question-answering and dialogue scenarios is constructed, and feature-importance analysis is conducted to support reproducible evaluation. The core contributions are: (1) standardisation of faithfulness measurement across domains and tasks; and (2) a fused metric that correlates significantly more strongly with human judgements than any individual metric, attaining state-of-the-art performance across diverse domains. Empirical results demonstrate substantial improvements in both the accuracy and the generalisability of LLM output credibility assessment.
📝 Abstract
We present a methodology for improving the accuracy of faithfulness evaluation in Large Language Models (LLMs). The methodology combines elementary faithfulness metrics into a single fused metric with the aim of evaluating the faithfulness of LLM outputs more reliably. The proposed fusion strategy deploys a tree-based model, trained on human judgements of the faithfulness of LLM responses, to identify the importance of each elementary metric. The fused metric is demonstrated to correlate more strongly with human judgements than any individual metric across all tested domains. Improving the ability to evaluate the faithfulness of LLMs allows greater confidence to be placed in these models, enabling their deployment in a wider diversity of scenarios. Additionally, we homogenise a collection of datasets across question-answering and dialogue-based domains and augment them with human judgements and LLM responses, allowing faithfulness evaluation to be reproduced and trialled across domains.
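The fusion idea described above can be sketched in a few lines: a tree-based regressor is fit to predict human faithfulness judgements from several elementary metric scores, and its feature importances indicate how much each metric contributes. This is a minimal illustration with synthetic data, not the paper's implementation; the choice of `GradientBoostingRegressor`, the three placeholder metrics, and the simulated judgements are all assumptions made for the sketch.

```python
# Minimal sketch of tree-based metric fusion (synthetic data, illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500

# Three hypothetical elementary faithfulness metrics, scored in [0, 1]
# (e.g. an NLI-based score, a QA-overlap score, an n-gram overlap score).
X = rng.uniform(0.0, 1.0, size=(n, 3))

# Simulated human judgements: a nonlinear blend of the metrics plus noise,
# standing in for real annotator scores.
y = 0.6 * X[:, 0] + 0.3 * np.sqrt(X[:, 1]) + 0.1 * X[:, 2] ** 2
y += rng.normal(0.0, 0.05, size=n)

train, test = slice(0, 400), slice(400, n)

# The tree-based model learns the fused metric from human judgements.
fuser = GradientBoostingRegressor(random_state=0).fit(X[train], y[train])
fused = fuser.predict(X[test])

# Compare correlation with human judgements: fused vs. best single metric.
r_fused = np.corrcoef(fused, y[test])[0, 1]
r_best_single = max(np.corrcoef(X[test, j], y[test])[0, 1] for j in range(3))
print(f"fused r = {r_fused:.3f}, best single-metric r = {r_best_single:.3f}")

# Feature importances play the role of the learned metric weights.
print("metric importances:", fuser.feature_importances_)
```

On data where human judgements depend nonlinearly on several metrics, the fused score should correlate more strongly with the judgements than any single metric, which mirrors the result reported in the abstract.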