From User Preferences to Optimization Constraints Using Large Language Models

📅 2025-03-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of automatically translating users' natural-language preferences, elicited in the context of Italian renewable energy communities (RECs), into formal constraints for smart-home energy optimization. Method: We propose an LLM-driven preference-to-constraint translation framework and introduce, to our knowledge, the first open-source Italian benchmark dataset of aligned preference-constraint pairs, alongside reference implementation code. Using zero-shot, one-shot, and few-shot prompting strategies, we systematically evaluate several Italian-capable LLMs, including GPT-4 and Claude models, on constraint generation. Contribution/Results: Experiments show that current Italian-capable LLMs possess a foundational capability for this task, with GPT-4 and Claude models significantly outperforming the others in both accuracy and robustness. The study also identifies key domain-specific adaptation challenges and practical limitations. Our work establishes a reproducible baseline and open infrastructure for LLM-assisted human-AI collaboration in energy system optimization.
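As a rough illustration of such a prompting pipeline (not the authors' released code), the sketch below builds zero-, one-, or few-shot prompts and parses the model's JSON output; the Italian instruction text, the constraint schema, and the call_llm placeholder are assumptions made for this example.

```python
# Minimal sketch of LLM-based preference-to-constraint translation.
# The instruction text, constraint schema, and call_llm() are hypothetical.
import json
from typing import Callable

def build_prompt(utterance: str, exemplars: list[tuple[str, dict]]) -> str:
    """Compose a prompt with 0 (zero-shot), 1 (one-shot), or k (few-shot) exemplars."""
    lines = [
        # Task instruction (Italian): "Translate the user's preference into a
        # formal JSON constraint for appliance energy optimization."
        "Traduci la preferenza dell'utente in un vincolo JSON formale "
        "per l'ottimizzazione energetica degli elettrodomestici.",
    ]
    for ex_utt, ex_constraint in exemplars:
        lines.append(f"Preferenza: {ex_utt}")
        lines.append(f"Vincolo: {json.dumps(ex_constraint, ensure_ascii=False)}")
    lines.append(f"Preferenza: {utterance}")
    lines.append("Vincolo:")
    return "\n".join(lines)

def translate(utterance: str,
              exemplars: list[tuple[str, dict]],
              call_llm: Callable[[str], str]) -> dict:
    """call_llm is a stand-in for whatever chat/completions client is used;
    the model's reply is expected to be a single JSON object."""
    return json.loads(call_llm(build_prompt(utterance, exemplars)))
```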

📝 Abstract
This work explores using Large Language Models (LLMs) to translate user preferences into energy optimization constraints for home appliances. We describe a task where natural language user utterances are converted into formal constraints for smart appliances, within the broader context of a renewable energy community (REC) in Italy. We evaluate the effectiveness of various LLMs currently available for Italian in translating these preferences under classical zero-shot, one-shot, and few-shot learning settings, using a pilot dataset of Italian user requests paired with corresponding formal constraint representations. Our contributions include establishing a baseline performance for this task, publicly releasing the dataset and code for further research, and providing insights on observed best practices and limitations of LLMs in this particular domain.
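For concreteness, here is a purely illustrative example of the kind of aligned preference-constraint pair described above; the field names and constraint schema are assumptions made for this sketch, and the released dataset defines the actual format.

```python
# Illustrative aligned pair: an Italian user utterance and a hypothetical formal
# constraint for the home energy optimizer (schema invented for this sketch).
example_pair = {
    # "Run the washing machine after 10 pm, when energy costs less."
    "utterance": "Avvia la lavatrice dopo le 22, quando l'energia costa meno",
    "constraint": {
        "appliance": "washing_machine",
        "type": "time_window",          # only run inside the allowed window
        "earliest_start": "22:00",
        "latest_end": "06:00",
    },
}
```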
Problem

Research questions and friction points this paper is trying to address.

Translate natural-language user preferences into energy optimization constraints
Evaluate Italian-capable LLMs on this translation task
Establish a baseline performance for constraint conversion (sketched below)
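A minimal sketch of how such a baseline could be scored, assuming gold and predicted constraints are comparable dictionaries; exact match is just one reasonable metric, and the field names are invented for illustration.

```python
# Exact-match accuracy between gold and predicted constraints (illustrative).
def exact_match_accuracy(gold: list[dict], predicted: list[dict]) -> float:
    """Fraction of predictions identical to the gold constraint annotation."""
    assert len(gold) == len(predicted), "one prediction per gold constraint"
    if not gold:
        return 0.0
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold)

# Toy example (values invented): half of the predictions match exactly.
gold = [{"appliance": "washing_machine", "earliest_start": "22:00"},
        {"appliance": "dishwasher", "earliest_start": "13:00"}]
pred = [{"appliance": "washing_machine", "earliest_start": "22:00"},
        {"appliance": "dishwasher", "earliest_start": "14:00"}]
print(exact_match_accuracy(gold, pred))  # 0.5
```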
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs translate user preferences into constraints
Zero-shot, one-shot, few-shot learning evaluated
Dataset and code released for research