🤖 AI Summary
Current large language models (LLMs) exhibit only superficial, task-dependent stylistic patterns when emulating human traits (e.g., personality, values), lacking cross-task consistency and stability. To address this, we propose *in-context reflective optimization*, a fine-tuning-free, zero-shot personality elicitation framework grounded in theory of mind. Leveraging information-theoretic principles, it iteratively generates and refines self-reflective textual prompts that model first-person experiential awareness, compress redundant representations, and strengthen the semantic alignment between traits and their behavioral manifestations. Our method achieves, for the first time, stable cross-task transfer of a single self-reflective prompt across three major personality frameworks (Big Five, HEXACO, and MBTI). Empirical evaluation demonstrates significant gains over strong baselines, with markedly improved consistency and generalization across diverse downstream tasks, including dialogue, reasoning, and creative generation, while preserving trait fidelity and behavioral coherence.
📝 Abstract
Trained on diverse human-authored corpora, Large Language Models (LLMs) have demonstrated some capability to reflect specific human-like traits (e.g., personality or values) through prompting, benefiting applications such as personalized LLMs and social simulations. However, existing methods suffer from a superficial elicitation problem: LLMs can only be steered to mimic shallow and unstable stylistic patterns, failing to embody the desired traits precisely and consistently across diverse tasks as humans do. To address this challenge, we propose IROTE, a novel in-context method for stable and transferable trait elicitation. Drawing on psychological theories suggesting that traits are formed through identity-related reflection, our method automatically generates and optimizes a textual self-reflection within prompts, comprising self-perceived experiences, to stimulate LLMs' trait-driven behavior. The optimization iteratively maximizes an information-theoretic objective that strengthens the connection between LLMs' behavior and the target trait while reducing noisy redundancy in the reflection, all without any fine-tuning, yielding evocative and compact trait reflections. Extensive experiments across three human trait systems show that a single IROTE-generated self-reflection induces stable impersonation of the target trait across diverse downstream tasks beyond simple questionnaire answering, consistently outperforming strong existing baselines.
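The abstract's iterative loop, maximizing trait alignment while penalizing redundancy without any fine-tuning, can be illustrated with a toy sketch. All functions here (`trait_alignment`, `redundancy`, `objective`, `optimize`) and the keyword-based scoring are hypothetical stand-ins for the paper's actual information-theoretic estimates, not IROTE's implementation:

```python
def trait_alignment(reflection: str, trait: str) -> float:
    # Toy proxy for the alignment term: fraction of trait-related
    # keywords present in the reflection. The real objective would
    # estimate the link between LLM behavior and the target trait.
    keywords = {"openness": ["curious", "imaginative", "novel"]}
    words = keywords.get(trait, [])
    hits = sum(w in reflection.lower() for w in words)
    return hits / max(len(words), 1)

def redundancy(reflection: str) -> float:
    # Toy redundancy penalty: repeated words raise the penalty.
    tokens = reflection.lower().split()
    return 1.0 - len(set(tokens)) / max(len(tokens), 1)

def objective(reflection: str, trait: str, lam: float = 0.5) -> float:
    # Maximize alignment minus a weighted redundancy penalty.
    return trait_alignment(reflection, trait) - lam * redundancy(reflection)

def optimize(candidates: list[str], trait: str, rounds: int = 3) -> str:
    # In the actual method an LLM would rewrite the reflection each
    # round; here we simply re-select the best fixed candidate to
    # show the select-and-refine structure of the loop.
    best = max(candidates, key=lambda r: objective(r, trait))
    for _ in range(rounds):
        best = max(candidates + [best], key=lambda r: objective(r, trait))
    return best
```

Under this toy scoring, a compact reflection that evokes the trait ("I am curious and imaginative about novel ideas.") beats a repetitive one ("I am I am I am."), mirroring the paper's goal of evocative yet non-redundant reflections.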