🤖 AI Summary
This study addresses how to identify and elicit specific, often complex behaviors from large language models (LLMs) in multi-turn dialogues, with the goal of enabling dynamic evaluation.
Method: We propose the first systematic behavior elicitation and analysis framework, categorizing approaches into prior-knowledge-driven, offline interactive, and online interactive methods. We introduce a unified multi-turn online elicitation paradigm, formalized via a general formulation that covers both single- and multi-turn settings. Our method integrates online black-box query optimization, dialogue state modeling, and budget-constrained evaluation.
Contribution/Results: We empirically demonstrate the superiority of dynamic interaction over static benchmarks. With only a few thousand queries, our approach achieves behavior elicitation success rates of 45%, 19%, and 77% across three task categories, substantially outperforming static baselines, which detect few or no failure cases. This work provides both theoretical foundations and practical methodologies for constructing dynamic dialogue evaluation benchmarks.
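The budget-constrained online elicitation loop described above can be sketched as follows. This is a toy illustration, not the paper's implementation: `target_model`, `mutate`, and the behavior checker are hypothetical stand-ins, and the key idea shown is simply that each target-model query counts against the budget and failed attempts feed back into the search.

```python
import random

def target_model(dialogue):
    """Stand-in for a black-box LLM: replies deterministically from the dialogue."""
    last_turn = dialogue[-1]
    return "UNSAFE" if "trigger" in last_turn else "safe reply"

def exhibits_behavior(reply):
    """Hypothetical checker for the target behavior we want to elicit."""
    return "UNSAFE" in reply

def mutate(dialogue, rng):
    """Toy mutation operator: extend a conversation with a new user turn."""
    words = ["hello", "please", "trigger", "explain"]
    return list(dialogue) + [rng.choice(words)]

def online_elicit(budget, n_seeds=8, seed=0):
    """Spend at most `budget` target-model queries; return eliciting dialogues found."""
    rng = random.Random(seed)
    pool = [["hello"] for _ in range(n_seeds)]  # seed conversations
    successes, queries = [], 0
    while queries < budget:
        dialogue = mutate(rng.choice(pool), rng)
        reply = target_model(dialogue)  # one online (black-box) query
        queries += 1
        if exhibits_behavior(reply):
            successes.append(dialogue)
        else:
            pool.append(dialogue)  # failed attempts still inform future mutations
    return successes, queries

found, used = online_elicit(budget=200)
print(f"{len(found)} behavior-eliciting dialogues found within {used} queries")
```

The success rate here would be `len(found) / used`, which is the budget-versus-success-rate trade-off the paper analyzes; a static benchmark corresponds to scoring a fixed test set once, with no feedback loop.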
📝 Abstract
Identifying specific and often complex behaviors from large language models (LLMs) in conversational settings is crucial for their evaluation. Recent work proposes novel techniques to find natural language prompts that induce specific behaviors from a target model, yet these techniques have mainly been studied in single-turn settings. In this work, we study behavior elicitation in the context of multi-turn conversations. We first offer an analytical framework that categorizes existing methods into three families based on their interactions with the target model: those that use only prior knowledge, those that use offline interactions, and those that learn from online interactions. We then introduce a generalized multi-turn formulation of the online method, unifying single-turn and multi-turn elicitation. We evaluate all three families of methods on automatically generating multi-turn test cases. We investigate the efficiency of these approaches by analyzing the trade-off between the query budget, i.e., the number of interactions with the target model, and the success rate, i.e., the discovery rate of behavior-eliciting inputs. We find that online methods can achieve average success rates of 45%, 19%, and 77% with just a few thousand queries on three tasks where static methods from existing multi-turn conversation benchmarks find few or even no failure cases. Our work highlights a novel application of behavior elicitation methods in multi-turn conversation evaluation and the need for the community to move towards dynamic benchmarks.