Vital Insight: Assisting Experts' Context-Driven Sensemaking of Multi-modal Personal Tracking Data Using Visualization and Human-In-The-Loop LLM Agents

📅 2024-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of translating multimodal passive tracking data (from smartphones and wearables) into high-level, context-aware insights. Following a human-centered design process, the authors design, build, and iterate Vital Insight (VI), an LLM-assisted prototype system that combines interactive visualization with human-in-the-loop inference over multi-modal sensing data. Using the prototype as a technology probe across three rounds of user studies with 21 domain experts, they observe how experts interact with the system and develop an expert sensemaking model that explains how experts move between direct data representations and AI-supported inferences to explore, question, and validate insights. They also synthesize a set of design implications for future AI-augmented visualization systems that support expert sensemaking of multi-modal health sensing data.

📝 Abstract
Passive tracking methods, such as phone and wearable sensing, have become dominant in monitoring human behaviors in modern ubiquitous computing studies. While there have been significant advances in machine-learning approaches to translate periods of raw sensor data to model momentary behaviors (e.g., physical activity recognition), there remains a significant gap in the translation of these sensing streams into meaningful, high-level, context-aware insights that are required for various applications (e.g., summarizing an individual's daily routine). To bridge this gap, experts often need to employ a context-driven sensemaking process in real-world studies to derive insights. This process often requires manual effort and can be challenging even for experienced researchers due to the complexity of human behaviors. We conducted three rounds of user studies with 21 experts to explore solutions to address challenges with sensemaking. We follow a human-centered design process to identify needs and design, iterate, build, and evaluate Vital Insight (VI), a novel, LLM-assisted prototype system to enable human-in-the-loop inference (sensemaking) and visualizations of multi-modal passive sensing data from smartphones and wearables. Using the prototype as a technology probe, we observe experts' interactions with it and develop an expert sensemaking model that explains how experts move between direct data representations and AI-supported inferences to explore, question, and validate insights. Through this iterative process, we also synthesize and discuss a list of design implications for the design of future AI-augmented visualization systems to better assist experts' sensemaking processes in multi-modal health sensing data.
Problem

Research questions and friction points this paper is trying to address.

Bridging the gap between raw sensor data and context-aware insights.
Enhancing experts' sensemaking of multi-modal passive tracking data.
Designing AI-augmented visualization systems for health data analysis.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-assisted system for multi-modal data
Human-in-the-loop inference and visualization
Context-driven sensemaking with AI support
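The paper does not publish implementation code, but the human-in-the-loop pattern the bullets above describe can be sketched in miniature. In this illustrative sketch, a rule-based stub stands in for the LLM agent, and an expert callback reviews (and may override) its proposed inference before it becomes an insight; all names here (`SensorEvent`, `agent_infer`, `human_in_the_loop`) are hypothetical, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    modality: str    # e.g., "heart_rate", "steps"
    value: float
    timestamp: int   # minutes since midnight

def agent_infer(events):
    """Stand-in for an LLM agent: propose a high-level label from raw events."""
    hr = [e.value for e in events if e.modality == "heart_rate"]
    steps = sum(e.value for e in events if e.modality == "steps")
    if hr and max(hr) > 120 and steps > 500:
        return "likely exercising"
    if hr and max(hr) < 70 and steps < 50:
        return "likely resting"
    return "uncertain"

def human_in_the_loop(events, expert_review):
    """The expert inspects the AI's proposal alongside the raw data and
    may accept or overwrite it -- the human-in-the-loop step."""
    proposal = agent_infer(events)
    return expert_review(proposal, events)

# Usage: an expert callback that accepts confident labels but flags "uncertain"
window = [SensorEvent("heart_rate", 130, 540), SensorEvent("steps", 900, 540)]
label = human_in_the_loop(
    window, lambda proposal, ev: proposal if proposal != "uncertain" else "needs context"
)
```

The key design point mirrored here is that the agent's inference is never final: the expert always sees both the raw data window and the proposal, matching the paper's observation that experts move between direct data representations and AI-supported inferences.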
Jiachen Li
Northeastern University, Boston, Massachusetts, USA
Justin Steinberg
Northeastern University, Boston, Massachusetts, USA
Xiwen Li
The University of Utah
AI in audio-visual surveillance, Computer Vision, Audio Processing
Akshat Choube
Northeastern University, Boston, Massachusetts, USA
Bingsheng Yao
Northeastern University, Boston, Massachusetts, USA
Dakuo Wang
Northeastern University
Human-AI Collaboration, Human-Centered AI, Human-Computer Interaction, AI for Healthcare, CSCW
Elizabeth Mynatt
Northeastern University, Boston, Massachusetts, USA
Varun Mishra
Northeastern University
Mobile Sensing, mHealth