🤖 AI Summary
Existing approaches struggle to characterize the high-level behavioral traits of large language models in strategic interaction settings. This work introduces activation steering into game-theoretic environments for the first time, constructing persona vectors that encode prosocial attributes such as altruism, forgiveness, and expectations of others via contrastive activation addition. Within classical games, these vectors systematically modulate both the model's quantitative strategic decisions and its natural-language justifications. The method not only enables targeted intervention on the model's behavior and explanations but also uncovers a potential misalignment between the two, demonstrating the value of persona vectors as a mechanistic handle on high-level traits in strategic contexts.
📝 Abstract
Large language models (LLMs) are increasingly deployed as autonomous decision-makers in strategic settings, yet we have limited tools for understanding their high-level behavioral traits. We apply activation steering in game-theoretic settings, constructing persona vectors for altruism, forgiveness, and expectations of others via contrastive activation addition. Evaluating on canonical games, we find that activation steering systematically shifts both quantitative strategic choices and natural-language justifications. However, we also observe that rhetoric and strategy can diverge under steering, and that vectors governing the model's own behavior and its expectations of others are partially distinct. Our results suggest that persona vectors offer a promising mechanistic handle on high-level traits in strategic environments.
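The core technique, contrastive activation addition, can be sketched in a few lines: collect hidden states from a chosen layer under contrasting prompts (e.g. altruistic vs. selfish role instructions), take the difference of their means as a steering vector, and add a scaled copy of that vector to the hidden states at generation time. The sketch below is a minimal toy illustration with random arrays standing in for transformer activations; the array shapes, prompt labels, and the `steer` helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for hidden states captured at one transformer layer,
# shape (n_prompts, d_model). In practice these would come from a
# forward hook while running contrasting role prompts.
d_model = 8
acts_altruistic = rng.normal(loc=1.0, size=(16, d_model))   # e.g. "You are altruistic..."
acts_selfish = rng.normal(loc=-1.0, size=(16, d_model))     # e.g. "You are selfish..."

# Contrastive activation addition: the steering (persona) vector is the
# difference of mean activations between the two prompt sets.
steering_vec = acts_altruistic.mean(axis=0) - acts_selfish.mean(axis=0)

def steer(hidden, vec, alpha=1.0):
    """Add the scaled steering vector to each position's hidden state."""
    return hidden + alpha * vec

# At generation time, nudge new hidden states along the persona direction.
h = rng.normal(size=(4, d_model))
h_steered = steer(h, steering_vec, alpha=2.0)
```

With a positive `alpha`, every steered hidden state's projection onto the persona direction increases by `alpha * ||steering_vec||^2`, which is the mechanism by which the intervention shifts downstream strategic choices and justifications.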