Prompting Large Language Models for Training-Free Non-Intrusive Load Monitoring

📅 2025-05-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Non-intrusive load monitoring (NILM) faces key challenges including heavy reliance on labeled data, poor cross-user generalization, and limited interpretability. To address these, this work introduces the first large language model (LLM)-based NILM framework, eliminating the need for model training or labeled data through prompt learning. Leveraging in-context learning, the approach constructs structured prompts that integrate appliance-specific features, timestamps, and representative time-series examples, enabling disaggregation of aggregate power consumption into device-level usage. The method achieves strong cross-user generalization and yields human-readable, step-by-step reasoning traces. Evaluated on unseen users, it attains an average F1-score of 0.676, significantly reducing data dependency while enhancing deployment flexibility and decision-making transparency.

📝 Abstract
Non-intrusive Load Monitoring (NILM) aims to disaggregate aggregate household electricity consumption into individual appliance usage, enabling more effective energy management. While deep learning has advanced NILM, it remains limited by its dependence on labeled data, restricted generalization, and lack of interpretability. In this paper, we introduce the first prompt-based NILM framework that leverages Large Language Models (LLMs) with in-context learning. We design and evaluate prompt strategies that integrate appliance features, timestamps and contextual information, as well as representative time-series examples, using the REDD dataset. With optimized prompts, LLMs achieve competitive state detection accuracy, reaching an average F1-score of 0.676 on unseen households, and demonstrate robust generalization without the need for fine-tuning. LLMs also enhance interpretability by providing clear, human-readable explanations for their predictions. Our results show that LLMs can reduce data requirements, improve adaptability, and provide transparent energy disaggregation in NILM applications.
Problem

Research questions and friction points this paper is trying to address.

Develops a training-free NILM method using LLMs to avoid labeled data dependency
Enhances NILM generalization and interpretability via prompt-based LLM frameworks
Reduces data needs and improves adaptability in energy disaggregation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt-based NILM framework using LLMs
Optimized prompts integrate appliance features and examples
LLMs achieve accuracy without fine-tuning
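The prompt strategy described in the summary and abstract (appliance features + timestamps + representative labeled windows, followed by a query window) can be sketched as a simple prompt builder. This is an illustrative reconstruction, not the authors' actual template: the function name `build_nilm_prompt`, the field layout, and the example readings are all hypothetical.

```python
# Hypothetical sketch of the paper's in-context-learning prompt construction
# for NILM state detection. The template, helper name, and sample power
# readings are illustrative assumptions, not taken from the paper.

def build_nilm_prompt(appliance, features, examples, query_window):
    """Assemble a structured prompt asking an LLM to infer an ON/OFF state.

    appliance:    appliance name (str)
    features:     dict of appliance-specific features
    examples:     list of (timestamp, power_window, state) demonstrations
    query_window: (timestamp, power_window) to classify
    """
    lines = [
        f"Task: decide whether the {appliance} is ON or OFF in the query window.",
        f"Appliance features: {features}",
        "Labeled examples (aggregate power in watts, 1-minute samples):",
    ]
    for ts, window, state in examples:
        lines.append(f"- {ts}: {window} -> {state}")
    qts, qwindow = query_window
    lines.append(f"Query ({qts}): {qwindow} -> ?")
    lines.append("Answer ON or OFF, with a brief step-by-step explanation.")
    return "\n".join(lines)

prompt = build_nilm_prompt(
    appliance="refrigerator",
    features={"typical_power_w": 150, "cycle_minutes": 20},
    examples=[
        ("2011-04-18 08:00", [310, 305, 150, 148], "ON"),
        ("2011-04-18 13:00", [95, 92, 90, 94], "OFF"),
    ],
    query_window=("2011-04-19 09:00", [240, 238, 150, 152]),
)
print(prompt)
```

The returned string would be sent as-is to an LLM; because the demonstrations carry both the readings and their labels, no fine-tuning is needed, matching the training-free setup the paper evaluates on REDD.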
Junyu Xue
Southern University of Science and Technology & Peng Cheng Laboratory
Xudong Wang
The Chinese University of Hong Kong, Shenzhen
Xiaoling He
The Hong Kong University of Science and Technology (Guangzhou)
Shicheng Liu
CS PhD Candidate, Stanford University
Natural Language Processing · Programming Languages & Systems
Yi Wang
Southern University of Science and Technology & Peng Cheng Laboratory
Guoming Tang
The Hong Kong University of Science and Technology (Guangzhou)
Sustainable Computing/AI · Cloud/Edge Computing · AI4Sus