🤖 AI Summary
This study addresses the absence of automated compliance auditing for the more than 500 million insurance benefit verification calls handled annually in the U.S. healthcare sector by introducing the first publicly available dialogue dataset and evaluation benchmark for compliance-aware voice agents. It proposes two novel tasks, phase boundary detection and compliance verification, and develops a structured annotation framework covering key stages such as IVR navigation, patient identification, and medication verification. The dataset combines real AI-initiated outbound calls with synthetically generated dialogues, annotated using phase-structured JSON schemas and explicit question-answer logic. Experimental results show that smaller models perform well on individual subtasks, yet accurately segmenting entire conversations end-to-end remains challenging, highlighting a critical gap between natural conversational flow and audit-grade evidentiary requirements.
📝 Abstract
Administrative phone tasks drain roughly 1 trillion USD annually from U.S. healthcare, with over 500 million insurance benefit verification calls handled manually in 2024. We introduce INSURE-Dial, to our knowledge the first public benchmark for developing and assessing compliance-aware voice agents through phase-aware call auditing with span-based compliance verification. The corpus includes 50 de-identified, AI-initiated calls with live insurance representatives (mean 71 turns/call) and 1,000 synthetically generated calls that mirror the same workflow. All calls are annotated with a phase-structured JSON schema covering IVR navigation, patient identification, coverage status, medication checks (up to two drugs), and agent identification (CRN), and each phase is labeled for Information Compliance (IC) and Procedural Compliance (PC) under explicit ask/answer logic. We define two novel evaluation tasks: (1) Phase Boundary Detection (span segmentation under phase-specific acceptance rules) and (2) Compliance Verification (IC/PC decisions given fixed spans). Per-phase scores are strong across small, low-latency baselines, but end-to-end reliability is constrained by span-boundary errors. On real calls, full-call exact-segmentation accuracy is low, revealing a gap between conversational fluency and audit-grade evidence.
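To make the annotation format concrete, the following is a minimal Python sketch of what a phase-structured record with ask/answer compliance logic might look like. All field names, turn indices, and the `verify_phase` helper are illustrative assumptions, not the actual INSURE-Dial schema.

```python
# Hypothetical phase-structured annotation for one call.
# Field names and values are illustrative only.
call_annotation = {
    "call_id": "real_0042",
    "phases": [
        {
            "name": "patient_identification",
            "span": {"start_turn": 12, "end_turn": 19},     # turn indices into the transcript
            "asked": ["patient_name", "date_of_birth"],     # items the agent asked about
            "answered": ["patient_name", "date_of_birth"],  # items the representative answered
        },
    ],
}

def verify_phase(phase, required_items):
    """Toy ask/answer logic: treat a phase as information-compliant only when
    every required item was both asked and answered within the phase span."""
    asked = set(phase["asked"])
    answered = set(phase["answered"])
    return required_items <= asked and required_items <= answered

# Example check against a hypothetical requirement set for this phase.
ok = verify_phase(call_annotation["phases"][0], {"patient_name", "date_of_birth"})
```

Under this toy logic, an item that was asked but never answered (or vice versa) fails the check, which mirrors the paper's distinction between merely raising a question and collecting audit-grade evidence.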