ChatChecker: A Framework for Dialogue System Testing and Evaluation Through Non-cooperative User Simulation

📅 2025-07-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Contemporary dialogue systems typically adopt an integrated architecture combining large language models (LLMs), external tools, and databases; thus, evaluating only the underlying LLM fails to ensure end-to-end quality. Existing evaluation methods predominantly focus on single-turn analysis and lack automated, process-aware testing for full conversational trajectories. Method: We propose an end-to-end testing framework based on non-cooperative user simulation: (1) a challenging, persona-driven user simulator requiring no reference dialogues or system-internal knowledge; (2) a fine-grained error taxonomy included in the breakdown-detection prompt to improve identification of dialogue failures and anomalies; and (3) a decoupled architecture enabling low-cost configuration and cross-system portability. Contribution/Results: Experiments demonstrate substantial improvements in defect detection rates, with strong generalizability, scalability, and robustness across diverse dialogue systems and evaluation settings.

📝 Abstract
While modern dialogue systems heavily rely on large language models (LLMs), their implementation often goes beyond pure LLM interaction. Developers integrate multiple LLMs, external tools, and databases. Therefore, assessment of the underlying LLM alone does not suffice, and the dialogue systems must be tested and evaluated as a whole. However, this remains a major challenge. With most previous work focusing on turn-level analysis, less attention has been paid to integrated dialogue-level quality assurance. To address this, we present ChatChecker, a framework for automated evaluation and testing of complex dialogue systems. ChatChecker uses LLMs to simulate diverse user interactions, identify dialogue breakdowns, and evaluate quality. Compared to previous approaches, our design reduces setup effort and is generalizable, as it does not require reference dialogues and is decoupled from the implementation of the target dialogue system. We improve breakdown detection performance over a prior LLM-based approach by including an error taxonomy in the prompt. Additionally, we propose a novel non-cooperative user simulator based on challenging personas that uncovers weaknesses in target dialogue systems more effectively. Through this, ChatChecker contributes to thorough and scalable testing. This enables both researchers and practitioners to accelerate the development of robust dialogue systems.
Problem

Research questions and friction points this paper is trying to address.

Testing integrated dialogue systems beyond turn-level analysis
Automated evaluation without requiring reference dialogues
Simulating non-cooperative users to uncover system weaknesses
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based non-cooperative user simulation
Error taxonomy-enhanced breakdown detection
Generalizable framework without reference dialogues
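The innovations above can be sketched as a test loop: a persona-conditioned simulator produces challenging user turns, the target system replies through an opaque interface (no internal knowledge needed), and a judge prompt that embeds an error taxonomy labels each system turn. This is a minimal illustrative sketch, not the paper's implementation; the persona fields, taxonomy labels, and the `simulate_user`/`detect` callables standing in for LLM calls are all assumptions.

```python
from dataclasses import dataclass

# Assumed taxonomy labels for illustration; the paper defines its own
# fine-grained error taxonomy.
ERROR_TAXONOMY = ["ignore_request", "contradiction", "repetition", "task_failure"]

@dataclass
class Persona:
    name: str
    traits: str  # e.g. "impatient, refuses to answer directly, changes topic"

@dataclass
class DialogueTurn:
    speaker: str
    text: str

def build_user_prompt(persona: Persona, history: list) -> str:
    """Compose the simulator prompt: challenging persona plus dialogue so far."""
    lines = [f"You are a challenging user: {persona.traits}."]
    lines += [f"{t.speaker}: {t.text}" for t in history]
    lines.append("user:")
    return "\n".join(lines)

def build_breakdown_prompt(history: list) -> str:
    """Judge prompt embeds the error taxonomy to guide breakdown detection."""
    taxonomy = ", ".join(ERROR_TAXONOMY)
    transcript = "\n".join(f"{t.speaker}: {t.text}" for t in history)
    return (f"Label the last system turn with one of [{taxonomy}] or 'ok'.\n"
            f"{transcript}")

def run_test(system_reply, simulate_user, detect, persona: Persona, max_turns: int = 3):
    """Drive the target system via its public interface only; collect breakdowns.

    system_reply, simulate_user, detect are callables (in practice, LLM or
    dialogue-system calls); here they are injected so the loop stays decoupled
    from any particular implementation.
    """
    history, breakdowns = [], []
    for _ in range(max_turns):
        user_text = simulate_user(build_user_prompt(persona, history))
        history.append(DialogueTurn("user", user_text))
        history.append(DialogueTurn("system", system_reply(user_text)))
        label = detect(build_breakdown_prompt(history))
        if label != "ok":
            breakdowns.append((len(history), label))
    return history, breakdowns
```

Because the loop only sees the target system through `system_reply`, the same harness can be pointed at any dialogue system without reference dialogues, which mirrors the framework's decoupled, low-setup design.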