LLM-Guided Exemplar Selection for Few-Shot Wearable-Sensor Human Activity Recognition

📅 2025-12-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the bottlenecks in few-shot wearable-sensor-based human activity recognition (HAR)β€”namely, heavy reliance on large-scale labeled data and poor discriminability among semantically similar activities (e.g., walking, upstairs, downstairs)β€”this paper proposes a cognition-driven, semantic-enhanced exemplar selection method. It introduces, for the first time in HAR, semantic priors generated by large language models (LLMs), including feature importance, inter-class confusion scores, and sample budget adjustment factors, to guide exemplar selection by jointly leveraging semantic, structural, and geometric cues. The method integrates LLM knowledge distillation, margin-based validation, PageRank centrality, hubness-aware penalization, and facility-location optimization. Evaluated under the few-shot setting of UCI-HAR, it achieves a macro-F1 score of 88.78%, significantly outperforming baselines including random sampling, herding, and k-center selection. This work establishes a novel, interpretable, and highly discriminative paradigm for exemplar selection in low-resource HAR.
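The summary mentions three kinds of LLM-generated priors: feature importance, inter-class confusion scores, and sample-budget adjustment factors. The paper does not publish its prior schema, so the sketch below is a hypothetical illustration of how such a prior might look and how budget multipliers could scale per-class exemplar counts; all field names and values are assumptions.

```python
# Hypothetical LLM-generated semantic prior for UCI-HAR classes.
# The keys, classes, and numeric values below are illustrative, not from the paper.
llm_prior = {
    "feature_importance": {"acc_mean": 1.0, "acc_std": 0.8, "gyro_mean": 0.6},
    "confusion": {
        ("walking", "walking_upstairs"): 0.7,
        ("walking_upstairs", "walking_downstairs"): 0.9,
    },
    "budget_multiplier": {
        "walking": 1.0, "walking_upstairs": 1.5, "walking_downstairs": 1.5,
        "sitting": 0.8, "standing": 0.8, "laying": 0.6,
    },
}

def per_class_budgets(base_budget, multipliers):
    """Scale a base per-class exemplar budget by LLM-suggested multipliers,
    keeping at least one exemplar per class."""
    return {cls: max(1, round(base_budget * m)) for cls, m in multipliers.items()}

budgets = per_class_budgets(4, llm_prior["budget_multiplier"])
# Confusable classes (upstairs/downstairs) receive larger budgets.
```

The intuition is that classes the LLM flags as highly confusable get a larger share of the limited exemplar budget, while easily separable static postures get fewer slots.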

πŸ“ Abstract
In this paper, we propose an LLM-Guided Exemplar Selection framework to address a key limitation in state-of-the-art Human Activity Recognition (HAR) methods: their reliance on large labeled datasets and purely geometric exemplar selection, which often fail to distinguish similar wearable-sensor activities such as walking, walking upstairs, and walking downstairs. Our method incorporates semantic reasoning via an LLM-generated knowledge prior that captures feature importance, inter-class confusability, and exemplar budget multipliers, and uses it to guide exemplar scoring and selection. These priors are combined with margin-based validation cues, PageRank centrality, hubness penalization, and facility-location optimization to obtain a compact and informative set of exemplars. Evaluated on the UCI-HAR dataset under strict few-shot conditions, the framework achieves a macro F1-score of 88.78%, outperforming classical approaches such as random sampling, herding, and $k$-center. The results show that LLM-derived semantic priors, when integrated with structural and geometric cues, provide a stronger foundation for selecting representative sensor exemplars in few-shot wearable-sensor HAR.
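The abstract names facility-location optimization as the final selection step. A standard way to realize this is greedy maximization of the facility-location objective F(S) = Σᵢ maxⱼ∈S sim(i, j), which is monotone submodular, so the greedy algorithm enjoys a (1 − 1/e) approximation guarantee. The sketch below is a generic greedy implementation under that standard formulation, not a reproduction of the paper's code.

```python
import numpy as np

def facility_location_greedy(sim, k):
    """Greedily select k exemplars maximizing the facility-location objective
    F(S) = sum_i max_{j in S} sim[i, j], where sim is an (n, n) nonnegative
    similarity matrix. Greedy gives a (1 - 1/e) approximation because F is
    monotone submodular."""
    n = sim.shape[0]
    selected = []
    covered = np.zeros(n)  # covered[i] = best similarity of i to the set so far
    for _ in range(k):
        # Marginal gain of adding each candidate j to the current set.
        gains = np.maximum(sim, covered[:, None]).sum(axis=0) - covered.sum()
        gains[selected] = -np.inf  # never re-pick an exemplar
        j = int(np.argmax(gains))
        selected.append(j)
        covered = np.maximum(covered, sim[:, j])
    return selected
```

With a similarity matrix built from two well-separated activity clusters, the greedy picks naturally land one exemplar per cluster, which is the "compact and informative" behavior the abstract describes.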
Problem

Research questions and friction points this paper is trying to address.

Selects representative exemplars for few-shot wearable-sensor activity recognition
Addresses reliance on large labeled datasets and geometric-only selection methods
Distinguishes similar activities like walking upstairs and downstairs using semantic reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-guided semantic reasoning for exemplar selection
Integration of margin-based validation with PageRank centrality
Facility-location optimization for compact informative exemplars
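Two of the structural cues listed above, PageRank centrality and hubness penalization, can be combined into a single candidate score. The sketch below runs power-iteration PageRank on a row-normalized similarity graph and penalizes points with high k-occurrence (hubs that appear in many neighbors' k-NN lists); the exponential weighting and the parameter `lam` are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def pagerank(W, d=0.85, iters=100):
    """Power-iteration PageRank over a transition matrix obtained by
    row-normalizing a nonnegative similarity graph W."""
    P = W / W.sum(axis=1, keepdims=True)
    n = W.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (P.T @ r)
    return r

def hubness_counts(sim, k):
    """k-occurrence: how often each point appears in other points' k-NN lists.
    High counts mark 'hub' samples that dominate neighborhoods."""
    n = sim.shape[0]
    counts = np.zeros(n)
    for i in range(n):
        nn = np.argsort(-sim[i])      # neighbors by descending similarity
        nn = nn[nn != i][:k]          # drop self, keep top-k
        counts[nn] += 1
    return counts

def exemplar_scores(sim, k=3, lam=0.5):
    """Score candidates as central-but-not-hubby: PageRank centrality
    discounted by normalized hubness (illustrative combination)."""
    return pagerank(sim) * np.exp(-lam * hubness_counts(sim, k) / k)
```

The design intent mirrors the innovation bullets: centrality favors samples that represent their neighborhood well, while the hubness discount avoids over-selecting points that sit near many class boundaries.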
Elsen Ronando
Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Kitakyushu, Japan
Sozo Inoue
Kyushu Institute of Technology, Japan
Ubiquitous computing / pervasive healthcare / activity recognition / smart life care