PerspAct: Enhancing LLM Situated Collaboration Skills through Perspective Taking and Active Vision

📅 2025-11-11
📈 Citations: 1
Influential: 1
🤖 AI Summary
Current large language models (LLMs) and multimodal models exhibit limited perspective-taking in multi-agent collaboration, which hinders accurate modeling of other agents' subjective perceptions in multi-observer environments. To address this, we propose PerspAct, a method that, for the first time, integrates active visual exploration with the ReAct reasoning framework. PerspAct explicitly samples and models diverse agent-centric perspectives, enabling it to handle increasing levels of perspective-taking complexity in an extended Director task. Built on multimodal LLMs, it leverages prompt engineering and explicit state representation. We systematically evaluate PerspAct across seven progressively complex scenarios; experiments show significant improvements in both coreference resolution and collaborative task accuracy, validating the benefit of jointly modeling active perception and perspective understanding. This work establishes a new paradigm for situational awareness in multi-agent settings.
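
The paper's exact implementation is not given here, but the summary suggests a reason/act loop in which the model either gathers another agent-centric view or commits to a referent. Below is a minimal Python sketch of such a ReAct-style loop with active visual exploration; the function and action names (`llm_complete`, `capture_view`, `LOOK`, `ANSWER`) are illustrative assumptions, not PerspAct's actual API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EpisodeState:
    observations: list = field(default_factory=list)  # agent-centric views gathered so far
    transcript: list = field(default_factory=list)    # interleaved Thought/Action/Observation log

def llm_complete(prompt: str) -> str:
    # Stub standing in for a multimodal LLM call; a real system would send the
    # prompt (and images) to a model and return its Thought/Action text.
    return ("Thought: Only the small candle is visible to the director.\n"
            "Action: ANSWER[small candle]")

def capture_view(viewpoint: str) -> str:
    # Stub standing in for active perception: capture or render the scene
    # from the requested viewpoint and describe it.
    return f"view from {viewpoint}: two candles; the large one is occluded from the director"

def react_step(state: EpisodeState, instruction: str) -> Optional[str]:
    """One ReAct iteration: reason about perspectives, then explore or answer."""
    prompt = (
        "You collaborate with a director who sees the scene from the front.\n"
        f"Instruction: {instruction}\n"
        "History:\n" + "\n".join(state.transcript) + "\n"
        "Respond with a Thought and one Action: LOOK[viewpoint] or ANSWER[referent]."
    )
    reply = llm_complete(prompt)
    state.transcript.append(reply)
    action = reply.split("Action:")[-1].strip()
    if action.startswith("LOOK["):
        viewpoint = action[len("LOOK["):-1]
        obs = capture_view(viewpoint)
        state.observations.append(obs)
        state.transcript.append(f"Observation: {obs}")
        return None  # keep exploring
    if action.startswith("ANSWER["):
        return action[len("ANSWER["):-1]  # committed referent
    return None

# Usage: iterate until the model commits to a referent or the step budget runs out.
state = EpisodeState()
for _ in range(5):
    referent = react_step(state, "Pick up the candle.")
    if referent is not None:
        print("Resolved referent:", referent)
        break
```

The key design point this sketch illustrates is that exploration and answering share one decision step, so the model can defer referent resolution until it has sampled enough perspectives.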

📝 Abstract
Recent advances in Large Language Models (LLMs) and multimodal foundation models have significantly broadened their application in robotics and collaborative systems. However, effective multi-agent interaction necessitates robust perspective-taking capabilities, enabling models to interpret both physical and epistemic viewpoints. Current training paradigms often neglect these interactive contexts, resulting in challenges when models must reason about the subjectivity of individual perspectives or navigate environments with multiple observers. This study evaluates whether explicitly incorporating diverse points of view using the ReAct framework, an approach that integrates reasoning and acting, can enhance an LLM's ability to understand and ground the demands of other agents. We extend the classic Director task by introducing active visual exploration across a suite of seven scenarios of increasing perspective-taking complexity. These scenarios are designed to challenge the agent's capacity to resolve referential ambiguity based on visual access and interaction, under varying state representations and prompting strategies, including ReAct-style reasoning. Our results demonstrate that explicit perspective cues, combined with active exploration strategies, significantly improve the model's interpretative accuracy and collaborative effectiveness. These findings highlight the potential of integrating active perception with perspective-taking mechanisms in advancing LLMs' application in robotics and multi-agent systems, setting a foundation for future research into adaptive and context-aware AI systems.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM perspective-taking for multi-agent collaboration
Addressing referential ambiguity through active visual exploration
Improving interpretative accuracy in interactive multi-observer environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating reasoning and acting via the ReAct framework
Incorporating active visual exploration into perspective-taking tasks
Using explicit perspective cues to improve collaborative accuracy (see the sketch below)
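
To make "explicit perspective cues" and "explicit state representation" concrete, here is a hedged Python illustration of one plausible encoding: a per-agent visibility table over scene objects, used to restrict referent candidates to what the director can see, which is the crux of the Director task. The schema and names below are assumptions for illustration, not the paper's exact representation.

```python
SCENE = {
    "objects": ["large candle", "small candle", "mug"],
    "visibility": {  # which agent can see which object
        "director": {"large candle": False, "small candle": True, "mug": True},
        "matcher":  {"large candle": True,  "small candle": True, "mug": True},
    },
}

def director_visible_candidates(scene: dict, noun: str) -> list:
    """Resolve a referring expression against only the director's view.

    In the Director task the speaker can only intend objects they can see,
    so items occluded from the director are excluded even if the matcher
    sees them; this is how explicit perspective state disambiguates "the candle".
    """
    visible = scene["visibility"]["director"]
    return [obj for obj in scene["objects"] if noun in obj and visible[obj]]

print(director_visible_candidates(SCENE, "candle"))  # -> ['small candle']
```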