🤖 AI Summary
This paper addresses greenhouse gas emissions across the full life cycle of AI hardware. We conduct the first systematic life-cycle assessment (LCA) of five generations of Google TPUs, covering raw material extraction, manufacturing, operational phases (training and inference), and end-of-life disposal. Notably, we incorporate empirically measured carbon emissions from AI chip fabrication and introduce the compute carbon intensity (CCI) metric, which reveals a threefold improvement in CCI from TPU v4i to TPU v6e. We establish a reusable LCA methodology that integrates empirical energy-efficiency measurements, cross-generational architectural comparison, and carbon footprint modeling. We also release the most comprehensive publicly available benchmark dataset of AI hardware carbon emissions to date. This work advances a software–hardware co-design paradigm for carbon-aware AI systems and gives engineers both methodological guidance and an empirical foundation for green AI hardware development.
📝 Abstract
Specialized hardware accelerators aid the rapid advancement of artificial intelligence (AI), and their efficiency impacts AI's environmental sustainability. This study presents the first comprehensive AI accelerator life-cycle assessment (LCA) of greenhouse gas emissions, including the first published manufacturing emissions of an AI accelerator. Our analysis of five Tensor Processing Units (TPUs) encompasses all stages of the hardware lifespan: from raw material extraction, manufacturing, and disposal, to energy consumption during development, deployment, and serving of AI models. Using first-party data, it offers the most comprehensive evaluation to date of AI hardware's environmental impact. We include detailed descriptions of our LCA to act as a tutorial, road map, and inspiration for other computer engineers to perform similar LCAs, to help us all understand the environmental impacts of our chips and of AI. A byproduct of this study is the new metric compute carbon intensity (CCI), which is helpful in evaluating AI hardware sustainability and in estimating the carbon footprint of training and inference. This study shows that CCI improves 3x from TPU v4i to TPU v6e. Moreover, while this paper's focus is on hardware, software advancements leverage and amplify these gains.
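To make the CCI idea concrete, here is a minimal sketch of how such a metric could be computed. The paper's exact formula is not given in this summary; the assumption below is that CCI divides total life-cycle emissions (embodied plus operational) by the useful compute delivered over the hardware's lifetime. All numbers are illustrative, not the paper's data.

```python
def compute_carbon_intensity(embodied_kgco2e: float,
                             operational_kgco2e: float,
                             lifetime_exaflops: float) -> float:
    """Hypothetical CCI: total life-cycle emissions per unit of delivered compute.

    Units here (kgCO2e per exaFLOP) are an assumption for illustration only.
    """
    return (embodied_kgco2e + operational_kgco2e) / lifetime_exaflops


# Made-up numbers chosen only to illustrate a 3x generational improvement,
# mirroring the reported CCI gain from TPU v4i to TPU v6e.
cci_older = compute_carbon_intensity(1500.0, 4500.0, 100.0)  # 60.0 kgCO2e/EFLOP
cci_newer = compute_carbon_intensity(1000.0, 3000.0, 200.0)  # 20.0 kgCO2e/EFLOP
print(cci_older / cci_newer)  # 3.0
```

A per-compute denominator is what lets the metric compare chips of very different raw throughput: a faster chip with higher absolute emissions can still have a lower CCI if it delivers proportionally more useful work over its life.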