Evaluating Contrastive Feedback for Effective User Simulations

📅 2025-05-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the low fidelity of large language model (LLM)-based user simulators in interactive information retrieval. We propose a contrastive-feedback-driven prompt engineering method that, for the first time, incorporates contrastive training principles into user simulation. Specifically, we inject paired summaries of relevant and irrelevant documents into the LLM’s context to implicitly model the user’s knowledge state—thereby improving consistency in relevance judgments and authenticity in retrieval behavior. Crucially, our approach requires no parameter fine-tuning; instead, it leverages context-aware prompting to guide knowledge-state representation. Experiments demonstrate that mixed contrastive summaries substantially outperform unidirectional prompts, yielding significant gains across multiple simulation fidelity metrics—including behavioral consistency, query reformulation realism, and session-level coherence. Our method establishes a new paradigm for developing high-fidelity, interpretable LLM-based user agents without architectural or parametric modifications.

Technology Category

Application Category

📝 Abstract
The use of Large Language Models (LLMs) for simulating user behavior in the domain of Interactive Information Retrieval has recently gained significant popularity. However, their application and capabilities remain highly debated and understudied. This study explores whether the underlying principles of contrastive training techniques, which have been effective for fine-tuning LLMs, can also be applied beneficially in the area of prompt engineering for user simulations. Previous research has shown that LLMs possess comprehensive world knowledge, which can be leveraged to provide accurate estimates of relevant documents. This study attempts to simulate a knowledge state by enhancing the model with additional implicit contextual information gained during the simulation. This approach enables the model to refine the scope of desired documents further. The primary objective of this study is to analyze how different modalities of contextual information influence the effectiveness of user simulations. Various user configurations were tested, where models are provided with summaries of already judged relevant, irrelevant, or both types of documents in a contrastive manner. The focus of this study is the assessment of the impact of the prompting techniques on the simulated user agent performance. We hereby lay the foundations for leveraging LLMs as part of more realistic simulated users.
Problem

Research questions and friction points this paper is trying to address.

Evaluating contrastive feedback for improving user simulations with LLMs
Assessing impact of contextual information on simulated user performance
Exploring prompt engineering techniques for realistic user behavior simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses contrastive training for prompt engineering
Enhances LLMs with implicit contextual information
Tests contrastive document summaries in simulations
🔎 Similar Papers
No similar papers found.