🤖 AI Summary
This study addresses the challenge of performing higher-order cognitive tasks—such as deciphering fictional terrorist plots—within complex, ambiguous data by systematically comparing the performance of large language models (specifically GPT-4) under holistic versus stepwise reasoning strategies. Integrating cognitive task decomposition with hypothesis generation techniques, the work presents the first controlled experiment evaluating the efficacy of these two approaches when augmented by LLMs. The findings demonstrate that LLMs can effectively support pattern recognition and the generation of coherent insights from intricate datasets, offering a novel paradigm and empirical foundation for designing efficient human-AI collaborative cognitive frameworks.
📝 Abstract
Sensemaking tasks often entail navigating complex, ambiguous data to construct coherent insights. Prior work has shown that crowds can effectively distribute cognitive load, pooling diverse perspectives to enhance analytical depth. Recent advancements in LLMs have further expanded the toolkit for sensemaking, offering scalable data processing, complex pattern recognition, and the ability to infer and propose meaningful hypotheses. In this study, we explore how LLMs (i.e., GPT-4) can assist in a complex sensemaking task: deciphering fictional terrorist plots. We compare two approaches for leveraging GPT-4's capabilities: a holistic sensemaking process and a step-by-step approach. Our preliminary investigations open the door for future research into optimizing human-AI collaborative workflows, aiming to harness the complementary strengths of both humans and LLMs for more effective sensemaking in complex scenarios.
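The two strategies the abstract contrasts can be sketched in code. The following is a minimal illustration, not the authors' implementation: `call_llm` is a hypothetical stand-in for a GPT-4 API call, and the prompts and document snippets are invented for illustration. The holistic variant issues a single prompt over the full corpus, while the stepwise variant first extracts notes per document and then synthesizes them.

```python
# Sketch of holistic vs. step-by-step LLM-assisted sensemaking.
# `call_llm` is a hypothetical placeholder; a real system would call
# the GPT-4 API (e.g., via the OpenAI client) here instead.

def call_llm(prompt: str) -> str:
    # Placeholder response so the sketch runs without an API key.
    return f"[model response to: {prompt[:40]}...]"

def holistic_sensemaking(documents: list[str]) -> str:
    # Holistic: give the model the entire corpus in one prompt and ask
    # for a single coherent hypothesis about the underlying plot.
    corpus = "\n\n".join(documents)
    return call_llm(f"Read all documents and infer the underlying plot:\n{corpus}")

def stepwise_sensemaking(documents: list[str]) -> str:
    # Step-by-step: decompose the task — first extract key entities and
    # events per document, then synthesize the notes into a hypothesis.
    notes = [call_llm(f"List key entities and events in:\n{doc}") for doc in documents]
    return call_llm("Synthesize a coherent hypothesis from these notes:\n" + "\n".join(notes))

docs = [
    "Report A: suspicious purchase of chemicals logged at a hardware store.",
    "Report B: travel records show two suspects meeting in the same city.",
]
print(holistic_sensemaking(docs))
print(stepwise_sensemaking(docs))
```

The trade-off the sketch makes visible is the one the study probes: the holistic call preserves cross-document context in a single pass, while the stepwise pipeline trades context for explicit intermediate structure that a human analyst could inspect.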