🤖 AI Summary
Emerging edge and cloud AI applications demand energy-efficient, high-performance computing, yet conventional embedded and datacenter architectures struggle to deliver both simultaneously. Method: This work systematically surveys 15 years of approximate computing research, introducing the first full-stack taxonomy—spanning programs, compilers, circuits, accelerators, and memory—along with rigorously defined core terminology and design principles; it further proposes a unified evaluation framework for quantitative, cross-layer analysis of the trade-off between performance and power consumption. Contribution/Results: The study delivers the first authoritative survey on approximate computing (Part I), addressing a critical gap in systematic, domain-wide reviews. By establishing foundational taxonomies and evaluation methodologies, it provides both theoretical grounding and practical guidance for algorithm–architecture co-optimization, thereby advancing energy-efficient computing for AI workloads.
📝 Abstract
The rapid growth of demanding applications in domains such as multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks; thus, the typical computing paradigms of embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, over the last 15 years, the semiconductor industry has established power efficiency as a first-class design concern. As a result, the computing-systems community is forced to seek alternative design approaches that facilitate high-performance and power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, resulting in novel approximation techniques for all layers of the traditional computing stack. More specifically, during the last decade, a plethora of approximation techniques in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories) have been proposed in the literature. The current article is Part I of a comprehensive survey on Approximate Computing. It reviews its motivation, terminology, and principles; classifies the state-of-the-art software and hardware approximation techniques; presents their technical details; and reports a comparative quantitative analysis.
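To make the core idea concrete, here is a minimal sketch of one well-known software approximation technique, loop perforation: skipping a fraction of loop iterations trades a small loss in result accuracy for a proportional reduction in work. This is an illustrative example, not an implementation drawn from any specific work in the survey; the function names and the stride parameter are hypothetical.

```python
def mean_exact(values):
    """Exact mean over all elements (the baseline computation)."""
    return sum(values) / len(values)

def mean_perforated(values, stride=2):
    """Approximate mean via loop perforation: process only every
    `stride`-th element, cutting the work by roughly a factor of
    `stride` at the cost of some accuracy."""
    sampled = values[::stride]
    return sum(sampled) / len(sampled)

data = list(range(1_000))
exact = mean_exact(data)            # 499.5
approx = mean_perforated(data, 4)   # visits 250 of 1000 elements -> 498.0
rel_error = abs(approx - exact) / exact
print(f"exact={exact:.1f} approx={approx:.1f} rel_error={rel_error:.3%}")
```

Here a 4x reduction in iterations yields a relative error of about 0.3%, the kind of accuracy-for-efficiency trade-off that approximation techniques across the stack aim to exploit and control.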