Predicting Encoding Energy from Low-Pass Anchors for Green Video Streaming

📅 2025-11-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the surging energy consumption and carbon emissions associated with high-resolution video streaming, this paper proposes a lightweight, green-transmission-oriented method for predicting and optimizing encoding energy. The core innovation is to leverage low-resolution anchor encodings to rapidly estimate high-resolution encoding energy, bypassing costly per-segment empirical measurement, and to jointly optimize resolution and quantization parameter (QP) under VMAF-based perceptual quality constraints. Implemented on VVenC, the multi-resolution energy model achieves a 51.22% reduction in encoding energy and a 53.54% reduction in decoding energy, with only a marginal average VMAF drop of 1.68, well below the just-noticeable-difference (JND) threshold. To the best of the authors' knowledge, this is the first approach to deliver large-scale, deployable energy-efficient video encoding while maintaining perceptually acceptable quality, providing a practical and scalable pathway toward sustainable video streaming.

📝 Abstract
Video streaming now represents the dominant share of Internet traffic, as ever-higher-resolution content is distributed across a growing range of heterogeneous devices to sustain user Quality of Experience (QoE). However, this trend raises significant concerns about energy efficiency and carbon emissions, requiring methods that trade off energy against QoE. This paper proposes a lightweight energy prediction method that estimates the energy consumption of high-resolution video encodings using reference encodings generated at lower resolutions (so-called anchors), eliminating the need for exhaustive per-segment energy measurements, a process that is infeasible at scale. We automatically select encoding parameters, such as resolution and quantization parameter (QP), to achieve substantial energy savings while keeping perceptual quality, as measured by Video Multimethod Assessment Fusion (VMAF), within acceptable limits. We implement and evaluate our approach with the open-source VVenC encoder on 100 video sequences from the Inter4K dataset across multiple encoding settings. Results show that, for an average VMAF score reduction of only 1.68, which stays below the Just Noticeable Difference (JND) threshold, our method achieves 51.22% encoding energy savings and 53.54% decoding energy savings compared to a scenario with no quality degradation.
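The anchor idea in the abstract can be illustrated with a minimal sketch: measure encoding energy once at a cheap low resolution, then extrapolate to higher resolutions with a simple scaling model. The power-law-in-pixel-count form, the exponent `ALPHA`, and all numbers below are illustrative assumptions for this sketch, not the paper's fitted model.

```python
# Hypothetical sketch: predict high-resolution encoding energy from a
# low-resolution "anchor" encoding via a power-law pixel-count model.
# ALPHA and the example numbers are assumptions, not values from the paper.

ALPHA = 0.9  # assumed scaling exponent (would be fitted offline per encoder preset)

def predict_energy(anchor_energy_j: float,
                   anchor_res: tuple[int, int],
                   target_res: tuple[int, int]) -> float:
    """Scale the measured anchor energy by the ratio of pixel counts."""
    anchor_px = anchor_res[0] * anchor_res[1]
    target_px = target_res[0] * target_res[1]
    return anchor_energy_j * (target_px / anchor_px) ** ALPHA

# Example: a 540p anchor measured at 120 J, extrapolated to 2160p.
e_4k = predict_energy(120.0, (960, 540), (3840, 2160))
```

With a model like this, only the anchor rendition needs real energy measurement; every other rung of the encoding ladder gets a prediction, which is what makes the approach feasible at catalog scale.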
Problem

Research questions and friction points this paper is trying to address.

Predicting video encoding energy using low-resolution reference anchors
Balancing energy efficiency with perceptual video quality preservation
Automating encoding parameter selection to reduce carbon emissions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predicts encoding energy using low-resolution anchor videos
Automatically selects resolution and QP parameters
Achieves over 50% energy savings with minimal quality loss
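The parameter-selection step listed above can be sketched as a constrained search: among candidate (resolution, QP) encodings with predicted energy and predicted VMAF, choose the cheapest one whose quality loss relative to the best candidate stays within a JND budget. The candidate ladder, the `JND` value, and the function name below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of resolution/QP selection under a VMAF constraint.
# All numbers are illustrative; the paper reports an average VMAF drop of
# only 1.68, below the JND threshold.

JND = 6.0  # assumed just-noticeable-difference budget in VMAF points

def select_encoding(candidates):
    """candidates: list of (label, predicted_vmaf, predicted_energy_joules).

    Returns the lowest-energy candidate whose VMAF loss versus the
    highest-quality candidate is at most JND.
    """
    best_vmaf = max(vmaf for _, vmaf, _ in candidates)
    feasible = [c for c in candidates if best_vmaf - c[1] <= JND]
    return min(feasible, key=lambda c: c[2])

ladder = [
    ("2160p/QP27", 96.0, 900.0),
    ("1440p/QP27", 93.5, 420.0),
    ("1080p/QP30", 91.0, 210.0),
    ("720p/QP32",  85.0, 110.0),
]
choice = select_encoding(ladder)  # 720p is excluded (11-point VMAF loss)
```

Under these made-up numbers the 1080p/QP30 rendition wins: it halves the energy of the next rung up while staying within the quality budget, mirroring the over-50% savings the paper reports.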