🤖 AI Summary
This study addresses the problem of real-time Theory of Mind (ToM) reasoning in novel social scenarios through multimodal integration of linguistic and visual inputs. Linguistic input conveys abstract, dynamic semantic information, while visual input provides concrete, scene-specific cues. To combine these complementary signals, we propose the first framework that couples vision-language model (VLM) parsing with Bayesian inverse planning, enabling language-driven construction of symbolic representations and vision-guided online modeling of rational agents. Our method integrates symbolic representation construction with cognitive-science-inspired task design, and it achieves significant improvements over state-of-the-art methods on multiple established cognitive science benchmarks as well as newly introduced social reasoning tasks. Remarkably, even with a lightweight VLM, our approach accurately reproduces human social judgment distributions, demonstrating both the efficacy and the interpretability of jointly modeling abstract semantics and concrete perception.
📝 Abstract
Drawing real-world social inferences usually requires taking into account information from multiple modalities. Language is a particularly powerful source of information in social settings, especially in novel situations where it can provide both abstract information about the environment dynamics and concrete specifics about an agent that cannot be easily observed visually. In this paper, we propose Language-Informed Rational Agent Synthesis (LIRAS), a framework for drawing context-specific social inferences that integrate linguistic and visual inputs. LIRAS frames multimodal social reasoning as a process of constructing structured but situation-specific agent and environment representations: it leverages multimodal language models to parse language and visual inputs into unified symbolic representations, over which a Bayesian inverse planning engine can be run to produce granular probabilistic judgments. On a range of existing and new social reasoning tasks derived from cognitive science experiments, we find that our model (instantiated with a comparatively lightweight VLM) outperforms ablations and state-of-the-art models in capturing human judgments across all domains.
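To make the two-stage pipeline described above concrete, here is a minimal, self-contained sketch: (1) parse multimodal input into a symbolic agent/environment representation, (2) run Bayesian inverse planning over that representation to obtain a posterior over the agent's candidate goals. All names and the toy scene below (`SymbolicScene`, `parse_with_vlm`, the "coffee"/"library" goals, a Manhattan-distance stand-in for a planner) are illustrative assumptions for this sketch, not the authors' actual interface or implementation.

```python
import math
from dataclasses import dataclass

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

@dataclass
class SymbolicScene:
    """Situation-specific symbolic representation of agent and environment."""
    start: tuple[int, int]
    goals: dict[str, tuple[int, int]]  # candidate goal name -> location

def parse_with_vlm(utterance: str, scene_description: str) -> SymbolicScene:
    """Stand-in for the VLM parsing step: the real framework would prompt a
    vision-language model to emit this structure; here it is hard-coded so
    the example runs end to end."""
    return SymbolicScene(start=(0, 0), goals={"coffee": (4, 0), "library": (0, 5)})

def cost_to_go(pos, goal):
    """Manhattan distance as a stand-in for a planner's cost-to-go."""
    return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

def action_loglik(pos, action, goal, beta=2.0):
    """Boltzmann-rational action model: cost-reducing moves are more probable."""
    def q(a):
        nxt = (pos[0] + ACTIONS[a][0], pos[1] + ACTIONS[a][1])
        return 1 + cost_to_go(nxt, goal)  # one step plus remaining cost
    logits = {a: -beta * q(a) for a in ACTIONS}
    log_z = math.log(sum(math.exp(v) for v in logits.values()))
    return logits[action] - log_z

def goal_posterior(scene: SymbolicScene, actions, beta=2.0):
    """Inverse planning: P(goal | actions) ∝ P(goal) * prod_t P(a_t | s_t, goal),
    with a uniform prior over the scene's candidate goals."""
    log_post = {}
    for name, loc in scene.goals.items():
        pos, ll = scene.start, math.log(1.0 / len(scene.goals))
        for a in actions:
            ll += action_loglik(pos, a, loc, beta)
            pos = (pos[0] + ACTIONS[a][0], pos[1] + ACTIONS[a][1])
        log_post[name] = ll
    m = max(log_post.values())
    z = m + math.log(sum(math.exp(v - m) for v in log_post.values()))
    return {g: math.exp(v - z) for g, v in log_post.items()}

if __name__ == "__main__":
    scene = parse_with_vlm("She said she needs caffeine.",
                           "agent at origin, cafe to the east, library to the north")
    # After two steps east, posterior mass should shift toward the "coffee" goal.
    print(goal_posterior(scene, ["right", "right"]))
```

In the full framework, the hard-coded scene would instead be produced by the VLM from the language and image inputs, and the Manhattan heuristic would be replaced by an actual planning engine, but the structure of the inference (symbolic scene in, graded goal posterior out) is the same.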