🤖 AI Summary
In digital health interventions, online AI systems must balance dynamic adaptability with scientific reproducibility—yet current practices fail to ensure cross-iteration data traceability, algorithmic auditability, and result comparability. This paper introduces the first end-to-end reproducible workflow for online AI, spanning its entire lifecycle. It integrates streaming data processing, versioned data storage, algorithmic audit logging, containerized deployment, and automated experiment tracking. Validated through multiple real-world deployments, the workflow enables complete, time-stamped recording and retrospective analysis of both data and model behavior across iterative updates. It significantly enhances algorithmic transparency, experimental reproducibility, and regulatory compliance. The core contribution lies in bridging the gap between adaptive learning and scientific rigor, establishing a methodological foundation and engineering paradigm for trustworthy, evolution-aware online AI in digital health.
📝 Abstract
Online artificial intelligence (AI) algorithms are an important component of digital health interventions. These online algorithms are designed to continually learn and improve their performance as streaming data are collected on individuals. Deploying online AI presents a key challenge: balancing the adaptability of online AI with reproducibility. Online AI in digital interventions is a rapidly evolving area, driven by advances in algorithms, sensors, software, and devices. Digital health intervention development and deployment is a continuous process in which implementation, including the AI decision-making algorithm, is interspersed with cycles of re-development and optimization. Each deployment informs the next, making iterative deployment a defining characteristic of this field. This iterative nature underscores the importance of reproducibility: data collected across deployments must be stored accurately to have scientific utility, algorithm behavior must be auditable, and results must be comparable over time to facilitate scientific discovery and trustworthy refinement. This paper proposes a reproducible scientific workflow for developing, deploying, and analyzing online AI decision-making algorithms in digital health interventions. Grounded in practical experience from multiple real-world deployments, the workflow addresses key challenges to reproducibility across all phases of the online AI algorithm development lifecycle.
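To make the audit-logging idea concrete, here is a minimal illustrative sketch (not the paper's actual implementation; all names such as `DecisionAuditLog` are hypothetical) of time-stamped, append-only recording of an online algorithm's decisions, so each action can later be traced back to a model version and the exact input data that produced it:

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionAuditLog:
    """Append-only record of online-algorithm decisions (illustrative sketch)."""

    def __init__(self):
        self.records = []

    def record(self, model_version, features, action):
        # Hash the serialized input features so the decision context can be
        # matched against versioned data storage during retrospective analysis.
        payload = json.dumps(features, sort_keys=True).encode()
        self.records.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "features_sha256": hashlib.sha256(payload).hexdigest(),
            "action": action,
        })

# Example: log one decision made by a (hypothetical) intervention-prompting algorithm.
log = DecisionAuditLog()
log.record("v2.1", {"steps_today": 4200, "prior_prompt": False}, action="send_prompt")
```

In a real deployment, entries like these would be persisted alongside the versioned data store rather than held in memory, allowing every iterative update of the algorithm to be audited and compared over time.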