🤖 AI Summary
Current human-AI collaborative decision-making faces two fundamental gaps: misalignment between AI systems and human values, and underutilization of AI's potential as a competent team member. To address these, we propose a four-dimensional framework—*Formulation*, *Coordination*, *Maintenance*, and *Training*—grounded in Team Situation Awareness (Team SA) theory. This is the first systematic articulation of the value-alignment and capability-activation problems across the full lifecycle of human-AI teams. Methodologically, we integrate human factors engineering, multi-agent modeling, and explainable AI (XAI) to design interaction protocols that support dynamic task delegation, adaptive responsibility allocation, and trust evolution. Our contributions are: (1) a structured research paradigm for human-AI teaming; (2) empirically grounded design principles for sustainable, high-performance collaboration; and (3) a forward-looking research agenda that advances AI from a passive tool to an active, learning-capable, adaptive, and autonomous collaborative partner.
📝 Abstract
Artificial Intelligence (AI) is advancing at an unprecedented pace, with clear potential to enhance decision-making and productivity. Yet the collaborative decision-making process between humans and AI remains underdeveloped, often falling short of its transformative possibilities. This paper examines the evolution of AI agents from passive tools to active collaborators in human-AI teams, emphasizing their ability to learn, adapt, and operate autonomously in complex environments. This paradigm shift challenges traditional team dynamics, requiring new interaction protocols, delegation strategies, and responsibility-distribution frameworks. Drawing on Team Situation Awareness (Team SA) theory, we identify two critical gaps in current human-AI teaming research: the difficulty of aligning AI agents with human values and objectives, and the underutilization of AI's capabilities as genuine team members. To address these gaps, we propose a structured research outlook centered on four key aspects of human-AI teaming: formulation, coordination, maintenance, and training. Our framework highlights the importance of shared mental models, trust-building, conflict resolution, and skill adaptation for effective teaming. Furthermore, we discuss the distinct challenges posed by varying team compositions, goals, and levels of complexity. This paper provides a foundational agenda for future research and for the practical design of sustainable, high-performing human-AI teams.