🤖 AI Summary
Rising AI inference energy consumption clashes with the spatiotemporal intermittency of renewable sources, particularly wind power, leading to low green electricity utilization and high carbon emissions from computing infrastructure. To address this, we propose the "co-located wind-farm green inference" paradigm. We design Heron, a cross-wind-farm AI workload routing framework integrating wind complementarity modeling, power-aware dynamic scheduling, and geographically distributed redundancy. Additionally, we introduce a modular datacenter scaling strategy enabling economically viable integration of over 6 million GPUs directly with on-site green power. Evaluated using real-world wind time-series data and Azure production traffic, our approach achieves up to an 80% improvement in aggregate effective AI throughput over state-of-the-art methods within a one-week evaluation window, marking the first demonstration of spatiotemporal co-alignment between compute workloads and volatile green electricity supply.
📝 Abstract
AI power demand is growing at an unprecedented rate, driven by the high power density of AI compute and emerging inference workloads. On the supply side, abundant wind power is waiting for grid access in interconnection queues. In this light, this paper argues for bringing AI workloads to modular compute clusters co-located at wind farms. Our deployment right-sizing strategy makes it economically viable to deploy more than 6 million high-end GPUs today that can consume cheap, green power at its source. We built Heron, a cross-site software router that efficiently leverages the complementarity of power generation across wind farms by routing AI inference workloads around power drops. Using one week of coding and conversation production traces from Azure and (real) variable wind power traces, we show how Heron improves the aggregate goodput of AI compute by up to 80% compared to the state-of-the-art.