AI Summary
This study addresses the challenge of optimizing human-AI team effectiveness in high-risk, collaborative intelligence, surveillance, and reconnaissance (ISR) missions. We empirically compared three AI teammate familiarization methods (document-based instruction, joint training, and no familiarization), assessing their impact on human operators' strategy formation, risk preference, and control behavior. A controlled user study with 60 participants integrated behavioral metrics, semi-structured interviews, and human factors evaluations. Results show that document-based familiarization accelerates strategy adoption but induces excessive risk aversion; interactive joint training enhances risk tolerance and exploratory behavior; and a hybrid framework, integrating guided documentation, structured hands-on practice, and unstructured interaction, achieves a superior balance between team performance and decision interpretability. This work presents the first multidimensional empirical evaluation of AI familiarization pathways in high-stakes operational contexts and proposes a generalizable, evidence-informed hybrid familiarization design paradigm.
Abstract
We compare three methods of familiarizing a human with an artificial intelligence (AI) teammate ("agent") prior to operation in a collaborative, fast-paced intelligence, surveillance, and reconnaissance (ISR) environment. In a between-subjects user study (n=60), participants either read documentation about the agent, trained alongside the agent prior to the mission, or were given no familiarization. Results showed that the most valuable information about the agent included details of its decision-making algorithms and its relative strengths and weaknesses compared to the human. This information allowed the familiarization groups to form sophisticated team strategies more quickly than the control group. Documentation-based familiarization led to the fastest adoption of these strategies, but also biased participants toward risk-averse behavior that prevented them from achieving high scores. Participants familiarized through direct interaction were able to infer much of the same information through observation, and were more willing to take risks and experiment with different control modes, but reported a weaker understanding of the agent's internal processes. Significant individual differences were observed in participants' risk tolerance and methods of interacting with the AI, which should be considered when designing human-AI control interfaces. Based on our findings, we recommend a human-AI team familiarization method that combines AI documentation, structured in-situ training, and exploratory interaction.