The Environmental Impacts of Machine Learning Training Keep Rising Evidencing Rebound Effect

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Despite advances in hardware efficiency, algorithmic optimization, and carbon-aware scheduling, large AI models—particularly LLMs—have exhibited exponential growth in energy consumption and carbon emissions over the past decade, revealing a significant environmental rebound effect in ML training. Method: This study conducts the first systematic life-cycle assessment (LCA) of GPUs used in AI training, quantifying carbon footprints across manufacturing, transportation, operational use, and electricity grid dependencies—including geographic variability in power generation mixes. Contribution/Results: We find that upstream emissions from GPU production have risen steadily, partially offsetting gains achieved during operation. Consequently, efficiency improvements alone are insufficient for sustainability. Achieving net decarbonization requires expanding evaluation boundaries beyond operational-phase energy use to full life-cycle accounting and implementing deliberate constraints on the scale and frequency of resource-intensive training runs. This work establishes a rigorous methodological foundation and provides actionable policy insights for green AI development.
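The full life-cycle accounting described above, upstream (manufacturing and transport) emissions amortized over the hardware's lifetime plus grid-dependent operational emissions, can be sketched in a few lines. All parameter names and numeric values below are illustrative assumptions, not figures from the paper.

```python
def lifecycle_emissions_kgco2e(
    n_gpus: int,
    training_hours: float,
    gpu_power_kw: float,            # assumed average draw per GPU under load
    pue: float,                     # datacenter power usage effectiveness
    grid_intensity_kg_per_kwh: float,  # carbon intensity of the local grid
    embodied_kg_per_gpu: float,     # manufacturing + transport emissions per card
    gpu_lifetime_hours: float,      # hours over which embodied cost is amortized
) -> dict:
    """Return operational, embodied, and total emissions in kg CO2e."""
    # Operational phase: electricity drawn during training, scaled by PUE
    # and the carbon intensity of the grid powering the datacenter.
    energy_kwh = n_gpus * gpu_power_kw * training_hours * pue
    operational = energy_kwh * grid_intensity_kg_per_kwh
    # Embodied phase: amortize each GPU's upstream emissions over the
    # fraction of its lifetime consumed by this training run.
    embodied = n_gpus * embodied_kg_per_gpu * (training_hours / gpu_lifetime_hours)
    return {"operational": operational,
            "embodied": embodied,
            "total": operational + embodied}

# Hypothetical run: 1,000 GPUs for 30 days on a moderately clean grid.
result = lifecycle_emissions_kgco2e(
    n_gpus=1000, training_hours=720, gpu_power_kw=0.4, pue=1.2,
    grid_intensity_kg_per_kwh=0.35, embodied_kg_per_gpu=150.0,
    gpu_lifetime_hours=35000,
)
```

Reporting `operational` alone, as most operational-phase studies do, would omit the `embodied` term entirely, which is the impact-shifting the paper warns against.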

📝 Abstract
Recent Machine Learning (ML) approaches have shown increased performance on benchmarks, but at the cost of escalating computational demands. Hardware, algorithmic, and carbon-aware optimizations have been proposed to curb energy consumption and environmental impacts. Can these strategies lead to sustainable ML model training? Here, we estimate the environmental impacts associated with training notable AI systems over the last decade, including Large Language Models, with a focus on the life cycle of graphics cards. Our analysis reveals two critical trends: First, the impacts of graphics card production have increased steadily over this period; Second, energy consumption and environmental impacts associated with training ML models have increased exponentially, even when considering reduction strategies such as shifting workloads to locations with less carbon-intensive electricity mixes. Optimization strategies do not mitigate the impacts induced by model training, evidencing a rebound effect. We show that the impacts of hardware must be considered over the entire life cycle rather than the use phase alone in order to avoid impact shifting. Our study demonstrates that increasing efficiency alone cannot ensure sustainability in ML. Mitigating the environmental impact of AI also requires reducing AI activities and questioning the scale and frequency of resource-intensive training.
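The rebound effect the abstract describes can be illustrated with a minimal calculation: even when training is shifted to a substantially cleaner grid, total operational emissions still rise if per-run energy grows faster than the grid decarbonizes. The growth factors and carbon intensities below are hypothetical assumptions chosen for illustration, not data from the study.

```python
def operational_emissions(energy_kwh: float, intensity_kg_per_kwh: float) -> float:
    """Operational-phase emissions in kg CO2e: energy times grid carbon intensity."""
    return energy_kwh * intensity_kg_per_kwh

# Hypothetical scenario: a newer model uses 10x the training energy,
# but runs on a grid that is 3x cleaner than the baseline.
baseline = operational_emissions(1e6, 0.45)   # older model, dirtier grid
next_gen = operational_emissions(1e7, 0.15)   # 10x energy, 3x cleaner grid

# next_gen exceeds baseline: the location/efficiency gain (3x) is
# outpaced by the growth in energy demand (10x), so emissions rise.
```

Under these assumptions the cleaner grid cuts the per-kWh footprint by two thirds, yet total emissions more than triple, which is the pattern the paper identifies across a decade of notable models.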
Problem

Research questions and friction points this paper is trying to address.

Investigating rising environmental impacts of ML training despite optimization efforts
Analyzing life cycle impacts of graphics cards in AI model training
Demonstrating efficiency gains alone cannot ensure sustainable machine learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hardware life cycle analysis beyond use phase
Optimization strategies failing to mitigate rebound effects
Reducing AI activities and training frequency