🤖 AI Summary
This work proposes a platform-agnostic, multimodal digital human modelling framework that addresses the limitations of existing AI-driven approaches, which are often constrained to specific platforms or tasks and lack mechanisms for reproducibility, scalability, and ethically compliant data reuse. By decoupling perception, interaction modelling, and inference preparation, the framework treats neurophysiological signals—including EEG, EMG, EOG, PPG, and inertial data—as time-aligned, structured observables rather than internal model embeddings, enabling flexible and ethically sound downstream reuse. Implemented with OpenBCI Galea hardware and the SuperTux gaming environment, the system uses computational task primitives to achieve precise event annotation and interaction modelling. Internal validation confirms data integrity, stream continuity, and multimodal synchronisation, indicating strong potential for scalable applications in accessible interaction and adaptive-systems research.
📝 Abstract
Digital Human Modelling (DHM) is increasingly shaped by advances in AI, wearable biosensing, and interactive digital environments, particularly in research addressing accessibility and inclusion. However, many AI-enabled DHM approaches remain tightly coupled to specific platforms, tasks, or interpretative pipelines, limiting reproducibility, scalability, and ethical reuse. This paper presents a platform-agnostic DHM framework designed to support AI-ready multimodal interaction research by explicitly separating sensing, interaction modelling, and inference readiness. The framework integrates the OpenBCI Galea headset as a unified multimodal sensing layer, providing concurrent EEG, EMG, EOG, PPG, and inertial data streams, alongside a reproducible, game-based interaction environment implemented using SuperTux. Rather than embedding AI models or behavioural inference, physiological signals are represented as structured, temporally aligned observables, enabling downstream AI methods to be applied under appropriate ethical approval. Interaction is modelled using computational task primitives and timestamped event markers, supporting consistent alignment across heterogeneous sensors and platforms. Technical verification via author self-instrumentation confirms data integrity, stream continuity, and synchronisation; no human-subjects evaluation or AI inference is reported. Scalability considerations are discussed with respect to data throughput, latency, and extension to additional sensors or interaction modalities. Illustrative use cases demonstrate how the framework can support AI-enabled DHM and HCI studies, including accessibility-oriented interaction design and adaptive systems research, without requiring architectural modifications. The proposed framework provides an emerging-technology-focused infrastructure for future ethics-approved, inclusive DHM research.
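The abstract describes interaction being modelled as timestamped event markers aligned against heterogeneous sensor streams on a shared clock. A minimal sketch of that alignment step is given below; the marker structure, field names, and sampling rates are illustrative assumptions for exposition, not the framework's actual data format:

```python
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class EventMarker:
    timestamp: float  # shared-clock time in seconds (hypothetical field)
    primitive: str    # computational task primitive label, e.g. "jump"

def align_marker(marker: EventMarker, sample_timestamps: list[float]) -> int:
    """Return the index of the sample closest in time to the marker.

    Assumes sample_timestamps is sorted ascending and expressed on the
    same clock as the marker, as a time-aligned observable stream would be.
    """
    i = bisect_left(sample_timestamps, marker.timestamp)
    if i == 0:
        return 0
    if i == len(sample_timestamps):
        return len(sample_timestamps) - 1
    before, after = sample_timestamps[i - 1], sample_timestamps[i]
    # Pick whichever neighbouring sample is nearer in time.
    return i if after - marker.timestamp < marker.timestamp - before else i - 1

# Two streams at different rates on a shared clock (illustrative rates):
eeg_ts = [k / 250.0 for k in range(2500)]  # 10 s of EEG at 250 Hz
ppg_ts = [k / 25.0 for k in range(250)]    # 10 s of PPG at 25 Hz
m = EventMarker(timestamp=3.1417, primitive="jump")
print(align_marker(m, eeg_ts))  # → 785 (nearest EEG sample)
print(align_marker(m, ppg_ts))  # → 79  (nearest PPG sample)
```

Because each marker carries only a shared-clock timestamp and a primitive label, the same alignment works unchanged for any additional stream (EMG, EOG, inertial) added later, which is the platform-agnostic property the abstract emphasises.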