AI Summary
Understanding complex human daily activities in real-world settings demands fine-grained, semantically rich, and hierarchically structured representations.
Method: We propose a novel multimodal, hierarchical annotation paradigm and introduce DARai, a large-scale dataset comprising 200+ hours of continuous, synchronized recordings from 50 participants across 20 modalities (RGB, depth, radar, IMU, EMG, insole pressure, biosignals, eye tracking, etc.). DARai features the first shared semantic annotation scheme spanning three levels: high-level activities (L1), mid-level actions (L2, with 22.7% cross-category reuse), and fine-grained procedures (L3, with 14.2% cross-category reuse). Benchmark experiments span hierarchical supervised recognition, temporal action localization, future action anticipation, and cross-modal contrastive training, complemented by a domain-variation evaluation framework.
Contribution/Results: Experiments demonstrate significant gains from multimodal fusion across all three understanding levels and expose fundamental limitations of single-sensor approaches. DARai establishes the first reproducible, human-centered AI benchmark for hierarchical activity understanding; code, documentation, and the full dataset are publicly released.
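The three-level hierarchy can be pictured as one nested record per annotated time span. The dataclass below is a hypothetical sketch for illustration only; the field names and example labels are our own, not the released DARai annotation schema:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One annotated time span with DARai-style three-level labels.
    Field names and label strings are illustrative, not the official schema."""
    start_s: float     # segment start time (seconds)
    end_s: float       # segment end time (seconds)
    l1_activity: str   # high-level activity (independent task)
    l2_action: str     # mid-level action (may recur under several L1 activities)
    l3_procedure: str  # fine-grained execution step

# The same L2 action can appear under different L1 activities,
# which is what enables cross-category reuse:
a = Segment(12.0, 15.5, "Making coffee", "Pouring", "Tilt kettle over cup")
b = Segment(40.2, 43.0, "Watering plants", "Pouring", "Tilt can over pot")
assert a.l2_action == b.l2_action and a.l1_activity != b.l1_activity
```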
Abstract
Daily Activity Recordings for Artificial Intelligence (DARai, pronounced "Dahr-ree") is a multimodal, hierarchically annotated dataset constructed to understand human activities in real-world settings. DARai consists of continuous scripted and unscripted recordings of 50 participants in 10 different environments, totaling over 200 hours of data from 20 sensors, including multiple camera views, depth and radar sensors, wearable inertial measurement units (IMUs), electromyography (EMG), insole pressure sensors, biomonitor sensors, and a gaze tracker. To capture the complexity of human activities, DARai is annotated at three levels of hierarchy: (i) high-level activities (L1) that are independent tasks, (ii) lower-level actions (L2) that are patterns shared between activities, and (iii) fine-grained procedures (L3) that detail the exact execution steps for actions. The dataset annotations and recordings are designed so that 22.7% of L2 actions are shared between L1 activities and 14.2% of L3 procedures are shared between L2 actions. This overlap, together with the unscripted nature of the recordings, allows counterfactual activities in the dataset. Experiments with various machine learning models showcase the value of DARai in uncovering important challenges in human-centered applications. Specifically, we conduct unimodal and multimodal sensor fusion experiments for recognition, temporal localization, and future action anticipation across all hierarchical annotation levels. To highlight the limitations of individual sensors, we also conduct domain-variation experiments enabled by DARai's multi-sensor and counterfactual activity design. The code, documentation, and dataset are available at the dedicated DARai website: https://alregib.ece.gatech.edu/software-and-datasets/darai-daily-activity-recordings-for-artificial-intelligence-and-machine-learning/
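The reuse percentages above measure how many labels at one level appear under more than one parent at the level above. A minimal sketch of that computation, using toy parent/child pairs rather than actual DARai annotations:

```python
from collections import defaultdict

def reuse_fraction(pairs):
    """Fraction of child labels that occur under more than one parent.
    `pairs` is an iterable of (parent_label, child_label) tuples,
    e.g. (L1 activity, L2 action) or (L2 action, L3 procedure)."""
    parents = defaultdict(set)
    for parent, child in pairs:
        parents[child].add(parent)
    shared = sum(1 for p in parents.values() if len(p) > 1)
    return shared / len(parents)

# Toy L1 -> L2 pairs: "pouring" is reused across two activities,
# so 1 of the 3 distinct L2 actions is shared.
pairs = [
    ("making coffee", "pouring"),
    ("watering plants", "pouring"),
    ("making coffee", "grinding"),
    ("cleaning", "wiping"),
]
print(reuse_fraction(pairs))  # -> 0.3333333333333333
```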