Choosing a Model, Shaping a Future: Comparing LLM Perspectives on Sustainability and its Relationship with AI

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies systematic value biases and institutional responsibility attribution biases in large language models (LLMs) regarding sustainability and AI’s societal role, aiming to inform organizational model selection under governance frameworks. Methodologically, it pioneers the use of standardized psychometric questionnaires—administered 100 times per model across five major LLMs (Claude, DeepSeek, GPT, LLaMA, Mistral)—combined with SDG-aligned quantitative evaluation and cross-model statistical comparison. Results reveal pronounced ideological divergence: GPT exhibits skepticism toward technological solutions for sustainability, whereas LLaMA displays extreme techno-optimism. The study empirically confirms that model choice significantly shapes sustainability-related decision outputs. It further identifies two governance-critical dimensions: “AI–sustainability compatibility cognition” and “responsibility attribution locus,” establishing an evidence-based foundation and methodological innovation for AI sustainability governance.

📝 Abstract
As organizations increasingly rely on AI systems for decision support in sustainability contexts, it becomes critical to understand the inherent biases and perspectives embedded in Large Language Models (LLMs). This study systematically investigates how five state-of-the-art LLMs -- Claude, DeepSeek, GPT, LLaMA, and Mistral -- conceptualize sustainability and its relationship with AI. We administered validated, psychometric sustainability-related questionnaires -- each 100 times per model -- to capture response patterns and variability. Our findings revealed significant inter-model differences: for example, GPT exhibited skepticism about the compatibility of AI and sustainability, whereas LLaMA demonstrated extreme techno-optimism with perfect scores for several Sustainable Development Goals (SDGs). Models also diverged in attributing institutional responsibility for AI and sustainability integration, a result that holds implications for technology governance approaches. Our results demonstrate that model selection could substantially influence organizational sustainability strategies, highlighting the need for awareness of model-specific biases when deploying LLMs for sustainability-related decision-making.
Problem

Research questions and friction points this paper is trying to address.

Investigating biases in LLMs regarding sustainability and AI relationships
Comparing how different LLMs conceptualize sustainability and AI integration
Assessing impact of model selection on organizational sustainability strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically compare five state-of-the-art LLMs
Use validated psychometric sustainability questionnaires
Analyze model-specific biases in sustainability perspectives
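The repeated-administration protocol above (each questionnaire item posed 100 times per model, then compared across models) can be sketched as follows. This is a minimal illustration, not the paper's actual code: `query_model` is a hypothetical stand-in for real LLM API calls, and the per-model biases are invented solely to make the simulation run; only the model names and the 100-run design come from the paper.

```python
import random
import statistics

MODELS = ["Claude", "DeepSeek", "GPT", "LLaMA", "Mistral"]
N_RUNS = 100  # each item is administered 100 times per model, as in the study


def query_model(model: str, item: str, rng: random.Random) -> int:
    """Hypothetical stand-in for an LLM API call, returning a Likert response (1-5).

    A real implementation would prompt the model with the questionnaire item
    and parse its answer; here we simulate responses with invented per-model
    biases just to illustrate the comparison pipeline."""
    bias = {"Claude": 3, "DeepSeek": 3, "GPT": 2, "LLaMA": 5, "Mistral": 3}[model]
    return max(1, min(5, bias + rng.choice([-1, 0, 0, 1])))


def administer(item: str, seed: int = 0) -> dict[str, tuple[float, float]]:
    """Run one questionnaire item N_RUNS times per model.

    Returns the mean and standard deviation of each model's responses,
    which is the per-model summary the cross-model comparison starts from."""
    rng = random.Random(seed)
    results = {}
    for model in MODELS:
        scores = [query_model(model, item, rng) for _ in range(N_RUNS)]
        results[model] = (statistics.mean(scores), statistics.stdev(scores))
    return results


stats = administer("AI technologies are compatible with sustainability goals.")
for model, (mean, sd) in stats.items():
    print(f"{model}: mean={mean:.2f} sd={sd:.2f}")
```

Capturing both the mean and the run-to-run variability is the point of the 100 repetitions: a single query would conflate a model's systematic stance with sampling noise.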