🤖 AI Summary
Existing activity-log generation methods suffer from limitations in accuracy, computational efficiency, and semantic expressiveness. This paper proposes a lightweight multimodal log generation framework that, for the first time, jointly models user activities using four dimensions of sensor data (location, motion, environmental, and physiological signals) fused from smartphones and smartwatches. The framework integrates structured prompt engineering with a 1.5B-parameter large language model to enable end-to-end real-time feature extraction and natural-language log generation. Its core innovations are four-dimensional context-aware modeling and a low-parameter, efficient inference design. Experimental results demonstrate a 17% improvement in BERTScore over prior work and inference nearly ten times faster than state-of-the-art methods. Moreover, the framework is deployable on resource-constrained edge devices, including PCs and Raspberry Pi platforms.
📝 Abstract
Rich and context-aware activity logs facilitate user behavior analysis and health monitoring, making them a key research focus in ubiquitous computing. The remarkable semantic understanding and generation capabilities of Large Language Models (LLMs) have recently created new opportunities for activity log generation. However, existing methods continue to exhibit notable limitations in accuracy, efficiency, and semantic richness. To address these challenges, we propose DailyLLM. To the best of our knowledge, this is the first log generation and summarization system that comprehensively integrates contextual activity information across four dimensions (location, motion, environment, and physiology) using only sensors commonly available on smartphones and smartwatches. To achieve this, DailyLLM introduces a lightweight LLM-based framework that integrates structured prompting with efficient feature extraction to enable high-level activity understanding. Extensive experiments demonstrate that DailyLLM outperforms state-of-the-art (SOTA) log generation methods and can be efficiently deployed on personal computers and Raspberry Pi. Using only a 1.5B-parameter LLM, DailyLLM achieves a 17% improvement in log generation BERTScore precision over the 70B-parameter SOTA baseline, while delivering nearly 10x faster inference.
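To make the "structured prompting over four context dimensions" idea concrete, here is a minimal, hypothetical sketch of how extracted sensor features might be assembled into a prompt for a small LLM. The paper does not publish its prompt template, so the field names, wording, and `ActivityContext` structure below are illustrative assumptions, not DailyLLM's actual implementation.

```python
# Hypothetical sketch of structured prompting for activity-log generation.
# Field names and prompt wording are assumptions; DailyLLM's real template
# and feature extraction pipeline are not shown in the abstract.
from dataclasses import dataclass


@dataclass
class ActivityContext:
    location: str     # e.g., place label derived from GPS/Wi-Fi
    motion: str       # e.g., activity class from IMU features
    environment: str  # e.g., ambient light/noise summary
    physiology: str   # e.g., heart rate from a smartwatch


def build_log_prompt(ctx: ActivityContext) -> str:
    """Assemble the four context dimensions into one structured prompt."""
    return (
        "You are an activity-log writer. Given the sensor context below, "
        "write one concise natural-language log entry.\n"
        f"- Location: {ctx.location}\n"
        f"- Motion: {ctx.motion}\n"
        f"- Environment: {ctx.environment}\n"
        f"- Physiology: {ctx.physiology}\n"
        "Log entry:"
    )


prompt = build_log_prompt(ActivityContext(
    location="office, 3rd floor",
    motion="sitting, low movement",
    environment="quiet, indoor lighting",
    physiology="heart rate 68 bpm, resting",
))
print(prompt)
```

In a full pipeline, the resulting prompt would be passed to a compact (e.g., 1.5B-parameter) LLM, which returns the natural-language log entry; keeping the prompt structured and short is what makes low-parameter, near-real-time inference on edge devices plausible.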