Unraveling Human-AI Teaming: A Review and Outlook

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current human-AI collaborative decision-making faces two fundamental gaps: misalignment between AI systems and human values, and underutilization of AI's potential as a competent team member. To address these, we propose a four-dimensional framework—*Formulate*, *Coordinate*, *Maintain*, and *Train*—grounded in Team Situation Awareness (Team SA) theory. This is the first systematic articulation of the value-alignment and capability-activation problems across the full lifecycle of human-AI teams. Methodologically, we integrate human factors engineering, multi-agent modeling, and explainable AI (XAI) to design interaction protocols that support dynamic task delegation, adaptive responsibility allocation, and trust evolution. Our contributions include: (1) a structured research paradigm for human-AI teaming; (2) empirically grounded design principles for sustainable, high-performance collaboration; and (3) a forward-looking research agenda that advances AI from a passive tool to an active, learning-capable, adaptive, and autonomous collaborative partner.

📝 Abstract
Artificial Intelligence (AI) is advancing at an unprecedented pace, with clear potential to enhance decision-making and productivity. Yet the collaborative decision-making process between humans and AI remains underdeveloped, often falling short of its transformative possibilities. This paper explores the evolution of AI agents from passive tools to active collaborators in human-AI teams, emphasizing their ability to learn, adapt, and operate autonomously in complex environments. This paradigm shift challenges traditional team dynamics, requiring new interaction protocols, delegation strategies, and responsibility-distribution frameworks. Drawing on Team Situation Awareness (SA) theory, we identify two critical gaps in current human-AI teaming research: the difficulty of aligning AI agents with human values and objectives, and the underutilization of AI's capabilities as genuine team members. Addressing these gaps, we propose a structured research outlook centered on four key aspects of human-AI teaming: formulation, coordination, maintenance, and training. Our framework highlights the importance of shared mental models, trust-building, conflict resolution, and skill adaptation for effective teaming. Furthermore, we discuss the unique challenges posed by varying team compositions, goals, and complexities. This paper provides a foundational agenda for future research and for the practical design of sustainable, high-performing human-AI teams.
Problem

Research questions and friction points this paper addresses.

Enhancing human-AI collaborative decision-making processes
Aligning AI agents with human values and objectives
Developing frameworks for effective human-AI team dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI agents learn and adapt autonomously
New interaction protocols and delegation strategies
Shared mental models and trust-building frameworks
Bowen Lou
University of Southern California
Tian Lu
Arizona State University
Human-AI collaboration, Impact of AI and big data, FinTech, E-commerce, Sharing economy
Raghu Santanam
Arizona State University
Yingjie Zhang
Peking University