Parameter-Efficient Fine-Tuning for HAR: Integrating LoRA and QLoRA into Transformer Models

πŸ“… 2025-12-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the prohibitively high computational cost of full-parameter fine-tuning large models for Human Activity Recognition (HAR) in resource-constrained settings, this paper proposes a lightweight adaptation framework based on Masked Autoencoders, pioneering the systematic integration of LoRA and QLoRA into HAR. Methodologically, it synergistically combines low-rank adaptation, weight quantization, and a self-supervised backbone, validated via Leave-One-Dataset-Out cross-dataset evaluation. Key contributions include: (1) revealing the tunable trade-off between accuracy and efficiency governed by adaptation rank; (2) achieving performance on par with full fine-tuning across five public HAR benchmarks; and (3) reducing trainable parameters by 93%, GPU memory consumption by 68%, and training time by 57%, while maintaining robustness under low-supervision regimes.

πŸ“ Abstract
Human Activity Recognition is a foundational task in pervasive computing. While recent advances in self-supervised learning and transformer-based architectures have significantly improved HAR performance, adapting large pretrained models to new domains remains a practical challenge due to limited computational resources on target devices. This paper investigates parameter-efficient fine-tuning techniques, specifically Low-Rank Adaptation (LoRA) and Quantized LoRA (QLoRA), as scalable alternatives to full model fine-tuning for HAR. We propose an adaptation framework built upon a Masked Autoencoder backbone and evaluate its performance under a Leave-One-Dataset-Out validation protocol across five open HAR datasets. Our experiments demonstrate that both LoRA and QLoRA can match the recognition performance of full fine-tuning while significantly reducing the number of trainable parameters, memory usage, and training time. Further analyses reveal that LoRA maintains robust performance even under limited supervision and that the adapter rank provides a controllable trade-off between accuracy and efficiency. QLoRA extends these benefits by reducing the memory footprint of frozen weights through quantization, with minimal impact on classification quality.
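The low-rank adaptation idea described in the abstract can be sketched in a few lines: the pretrained weight matrix stays frozen, and only two small factors A and B (with rank r much smaller than the layer width) are trained. The following NumPy-only sketch is illustrative, not the paper's implementation; the class name, initialization constants, and dimensions are assumptions.

```python
import numpy as np

class LoRALinear:
    """Minimal sketch of a LoRA-adapted linear layer (hypothetical)."""

    def __init__(self, d_in, d_out, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight: never updated during adaptation.
        self.W = rng.standard_normal((d_out, d_in))
        # Trainable low-rank factors: A gets a small random init,
        # B starts at zero so the adapter is a no-op at initialization.
        self.A = rng.standard_normal((rank, d_in)) * 0.01
        self.B = np.zeros((d_out, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # y = x W^T + (alpha / r) * x A^T B^T
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T


layer = LoRALinear(d_in=6, d_out=3, rank=2)
x = np.ones((1, 6))
# With B zero-initialized, the adapted layer reproduces the frozen model.
assert np.allclose(layer.forward(x), x @ layer.W.T)
```

The adapter rank r directly controls the accuracy/efficiency trade-off the paper measures: trainable parameters grow as r(d_in + d_out) instead of d_in * d_out, which is where the reported ~93% reduction in trainable parameters comes from.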
Problem

Research questions and friction points this paper is trying to address.

Adapting large pretrained models to new domains with limited computational resources
Investigating parameter-efficient fine-tuning techniques for Human Activity Recognition
Reducing trainable parameters, memory usage, and training time while maintaining performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating LoRA and QLoRA into transformer models
Using a Masked Autoencoder backbone for adaptation
Reducing parameters and memory with quantization
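The quantization contribution listed above rests on storing the frozen backbone weights in low precision and dequantizing them on the fly. The sketch below uses simple symmetric absmax 4-bit quantization for illustration (QLoRA proper uses the NF4 data type with double quantization); all function names here are assumptions, not the paper's code.

```python
import numpy as np

def quantize_absmax(W, bits=4):
    """Symmetric absmax quantization of a weight tensor (illustrative)."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for 4-bit signed
    scale = np.abs(W).max() / qmax          # per-tensor scaling factor
    q = np.clip(np.round(W / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float weight for the forward pass."""
    return q.astype(np.float32) * scale


W = np.random.default_rng(1).standard_normal((64, 64)).astype(np.float32)
q, s = quantize_absmax(W)
W_hat = dequantize(q, s)
# Rounding error per element is bounded by half a quantization step.
assert np.abs(W - W_hat).max() <= s / 2 + 1e-6
```

Storing the frozen weights at 4 bits instead of 32 cuts their memory footprint roughly 8x, which is consistent in spirit with the 68% GPU-memory reduction reported; only the small LoRA factors remain in full precision for gradient updates.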
πŸ”Ž Similar Papers
No similar papers found.
I
Irina Seregina
Univ. Grenoble Alpes, Grenoble, France
Philippe Lalanda
LIG/ADELE UniversitΓ© Joseph Fourier at Grenoble
software engineering · autonomic computing · pervasive computing
German Vega
Univ. Grenoble Alpes, Grenoble, France