🤖 AI Summary
This study addresses the lack of publicly available evaluation benchmarks and standardized metrics for structured data extraction in building energy modeling (BEM), which has hindered systematic assessment and deployment of large language models (LLMs). To bridge this gap, we propose BEMEval-Doc2Schema, the first BEM-specific evaluation framework. It integrates multiple real-world datasets (HERS L100, NREL iUnit, and NIST NZERTF) and introduces the Key-Value Overlap Rate (KVOR) as a core evaluation metric. We conduct comprehensive zero-shot and few-shot evaluations of leading LLMs, including GPT-5 and Gemini 2.5. Results demonstrate that Gemini 2.5 consistently outperforms GPT-5 and that few-shot prompting substantially improves extraction performance. Moreover, the simpler EPC schema yields higher KVOR scores than the more complex HPXML, highlighting the critical impact of schema complexity on extraction efficacy. This work advances standardization and reproducibility in AI-assisted BEM research.
📝 Abstract
Recent advances in foundation models, including large language models (LLMs), have created new opportunities to automate building energy modeling (BEM). However, systematic evaluation has remained challenging due to the absence of publicly available, task-specific datasets and standardized performance metrics. We present BEMEval, a benchmark framework designed to assess foundation models' performance across BEM tasks. The first benchmark in this suite, BEMEval-Doc2Schema, focuses on structured data extraction from building documentation, a foundational step toward automated BEM processes. BEMEval-Doc2Schema introduces the Key-Value Overlap Rate (KVOR), a metric that quantifies the alignment between LLM-generated structured outputs and ground-truth schema references. Using this framework, we evaluate two leading models (GPT-5 and Gemini 2.5) under zero-shot and few-shot prompting strategies across three datasets: HERS L100, NREL iUnit, and NIST NZERTF. Results show that Gemini 2.5 consistently outperforms GPT-5, and that few-shot prompts improve accuracy for both models. Performance also varies by schema: the EPC schema yields significantly higher KVOR scores than HPXML, reflecting its simpler structure and reduced hierarchical depth. By combining curated datasets, reproducible metrics, and cross-model comparisons, BEMEval-Doc2Schema establishes the first community-driven benchmark for evaluating LLMs in performing building energy modeling tasks, laying the groundwork for future research on AI-assisted BEM workflows.
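The abstract does not spell out how KVOR is computed. One plausible formulation, assuming the metric flattens both the model output and the ground-truth schema instance into dotted-path key-value pairs and scores the fraction of reference pairs reproduced exactly, is sketched below. The function names `flatten_kv` and `kvor` and the exact-match rule are illustrative assumptions, not the paper's definition:

```python
def flatten_kv(obj, prefix=""):
    """Flatten a nested dict/list (e.g. parsed HPXML or EPC JSON)
    into a flat mapping of dotted-path keys to leaf values.
    NOTE: illustrative sketch; the paper's exact KVOR definition may differ."""
    items = {}
    if isinstance(obj, dict):
        for key, val in obj.items():
            items.update(flatten_kv(val, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for idx, val in enumerate(obj):
            items.update(flatten_kv(val, f"{prefix}{idx}."))
    else:
        items[prefix[:-1]] = obj  # strip trailing dot from the path
    return items

def kvor(predicted, reference):
    """Hypothetical Key-Value Overlap Rate: fraction of ground-truth
    key-value pairs that appear, with matching values, in the output."""
    pred = flatten_kv(predicted)
    ref = flatten_kv(reference)
    if not ref:
        return 1.0
    matched = sum(1 for k, v in ref.items() if pred.get(k) == v)
    return matched / len(ref)
```

Under this reading, a deeper schema such as HPXML produces longer key paths with more opportunities for structural mismatch, which is consistent with the reported gap between EPC and HPXML scores.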