AI Summary
This study investigates how large language models balance structural correctness and environmental sustainability when generating structured outputs, with a focus on the emerging TOON format. To this end, the authors propose the first evaluation framework that jointly considers structural fidelity and carbon emissions, introducing an environment-aware Generation Correctness Score (GCS_env) derived from multi-model benchmarking, token-level resource tracking, and carbon footprint estimation. Experimental results demonstrate that TOON substantially reduces output size and associated carbon emissions; however, its structural correctness heavily depends on native model support. The correctness gap diminishes as model scale increases, and notably, GCS_env can invert conventional rankings of output formats, highlighting the sustainability advantages of compact formats like TOON in large-scale deployments.
Abstract
Large Language Models (LLMs) are increasingly required to generate structured, machine-readable outputs for downstream systems. While recent benchmarks have focused on evaluating the structural correctness of such outputs, the environmental impact of inference for different output formats has largely been overlooked. In this paper, we argue that structured output formats should be assessed not only in terms of correctness, but also with respect to their environmental efficiency. To this end, we introduce a sustainability-aware evaluation framework for structured generation that measures token usage, generation time, and estimated carbon emissions. Within this framework, we propose the Environment-Aware Generation Correctness Score (GCS_env), a unified metric that integrates structural correctness with carbon-aware efficiency. Using this framework, we systematically benchmark the novel TOON format against established representations (JSON, XML, YAML) across multiple LLMs spanning different architectures and parameter scales. Our results reveal a consistent trade-off: TOON yields markedly more compact outputs and lower emissions, but lower structural correctness when models lack native support. We show that increased model capacity reduces this gap and that environment-aware scoring can shift format rankings depending on deployment priorities, highlighting the need for sustainability-inclusive benchmarking and providing empirical evidence that compact representations such as TOON can offer practical advantages in large-scale, carbon-conscious LLM deployments.
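To make the idea of an environment-aware correctness score concrete, here is a minimal sketch of how structural correctness could be blended with normalized carbon emissions. The combination rule, the `alpha` weight, and all numbers below are illustrative assumptions, not the paper's actual GCS_env definition:

```python
# Hypothetical sketch of an environment-aware generation correctness score.
# ASSUMPTION: GCS_env is modeled here as a weighted blend of structural
# correctness and carbon efficiency; the paper's real formula may differ.

def gcs_env(correctness: float, emissions_g: float,
            max_emissions_g: float, alpha: float = 0.5) -> float:
    """Blend structural correctness with carbon efficiency.

    correctness:     structural correctness score in [0, 1]
    emissions_g:     estimated CO2-equivalent grams for this format/model
    max_emissions_g: worst-case emissions across the compared formats
    alpha:           weight on correctness vs. efficiency
    """
    # Efficiency is 1 for the least-emitting format, 0 for the worst case.
    efficiency = 1.0 - emissions_g / max_emissions_g
    return alpha * correctness + (1.0 - alpha) * efficiency

# Illustration of how rankings can invert: a compact format with slightly
# lower correctness (e.g. TOON without native support) can outrank a more
# verbose but more reliably parsed format once emissions are factored in.
toon_score = gcs_env(correctness=0.80, emissions_g=4.0, max_emissions_g=10.0)
json_score = gcs_env(correctness=0.95, emissions_g=10.0, max_emissions_g=10.0)
```

With these illustrative inputs, `toon_score` exceeds `json_score`, mirroring the paper's observation that environment-aware scoring can shift format rankings depending on deployment priorities.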