AI Summary
This work addresses the urgent demand for energy-efficient, flexible low-precision floating-point multiply-accumulate (MAC) units driven by AI and edge-computing applications. The paper proposes a fully pipelined dual-precision floating-point MAC engine supporting the FP8 (E4M3/E5M2) and FP4 (E2M1/E1M2) formats. Its key innovation is a novel bit-partitioning architecture that allows a single 4-bit multiplier to be dynamically configured as either one 4×4 or two parallel 2×2 multipliers, achieving 100% hardware utilization with no redundant logic. Implemented in 28 nm CMOS technology, the design integrates mixed-precision support and dynamic bit-width reconfiguration, operating at 1.94 GHz while occupying only 0.00396 mm² and consuming 2.13 mW, yielding up to 60.4% area and 86.6% power savings compared to state-of-the-art alternatives.
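The partitioning idea can be illustrated behaviorally: a 4×4 unsigned multiply decomposes into four 2×2 partial products, and the same 2×2 unit multipliers can instead emit independent products for narrow operands. The sketch below is a minimal Python model of that decomposition under standard radix-4 splitting; function names are illustrative and do not come from the paper, and it models only the integer mantissa datapath, not the full floating-point pipeline.

```python
def mul2x2(a, b):
    """The 2-bit x 2-bit unit multiplier (behavioral model)."""
    assert 0 <= a < 4 and 0 <= b < 4
    return a * b

def mul_4x4_mode(a, b):
    """4x4 mode (FP8 mantissas): one full product assembled from four
    2x2 partial products, with a = ah*4 + al and b = bh*4 + bl."""
    ah, al = a >> 2, a & 0b11
    bh, bl = b >> 2, b & 0b11
    return ((mul2x2(ah, bh) << 4)
            + ((mul2x2(ah, bl) + mul2x2(al, bh)) << 2)
            + mul2x2(al, bl))

def mul_dual_2x2_mode(a0, b0, a1, b1):
    """2x2 mode (FP4 mantissas): the same unit multipliers produce
    two independent products in parallel, so no logic sits idle."""
    return mul2x2(a0, b0), mul2x2(a1, b1)
```

For example, `mul_4x4_mode(13, 11)` reassembles the four partial products into `143 == 13 * 11`, while `mul_dual_2x2_mode` reuses the identical 2×2 units for two unrelated operand pairs, which is the sense in which a single reconfigurable unit reaches full utilization in both precisions.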
Abstract
The rapid adoption of low-precision arithmetic in artificial intelligence and edge computing has created a strong demand for energy-efficient and flexible floating-point multiply-accumulate (MAC) units. This paper presents a fully pipelined dual-precision floating-point MAC processing engine supporting FP8 formats (E4M3, E5M2) and FP4 formats (E2M1, E1M2), specifically optimized for low-power, high-throughput AI workloads. The proposed architecture employs a novel bit-partitioning technique that enables a single 4-bit unit multiplier to operate either as a standard 4×4 multiplier for FP8 or as two parallel 2×2 multipliers for 2-bit FP4 operands, achieving 100% hardware utilization without duplicating logic. Implemented in 28 nm technology, the proposed processing engine achieves an operating frequency of 1.94 GHz with an area of 0.00396 mm² and power consumption of 2.13 mW, resulting in up to 60.4% area reduction and 86.6% power savings compared to state-of-the-art designs.
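The format names above follow the ExMy convention: one sign bit, x exponent bits, and y mantissa bits. As a reference for how such values are interpreted, the sketch below decodes a generic minifloat bit pattern, assuming an IEEE-754-style layout with subnormals and a default bias of 2^(x-1) - 1; this is a reading aid, not the paper's decoder, and it ignores format-specific special values (e.g. E4M3's NaN encodings under the OCP FP8 convention).

```python
def decode_minifloat(bits, exp_bits, man_bits, bias=None):
    """Decode an unsigned bit pattern as a small float.

    Layout assumed: sign | exponent | mantissa, IEEE-754 style with
    subnormals. Default bias is 2**(exp_bits-1) - 1. Format-specific
    NaN/inf encodings (e.g. in E4M3/E5M2) are not handled here.
    """
    if bias is None:
        bias = (1 << (exp_bits - 1)) - 1
    sign = -1.0 if (bits >> (exp_bits + man_bits)) & 1 else 1.0
    exp = (bits >> man_bits) & ((1 << exp_bits) - 1)
    man = bits & ((1 << man_bits) - 1)
    if exp == 0:  # subnormal: no implicit leading 1
        return sign * man * 2.0 ** (1 - bias - man_bits)
    return sign * (1.0 + man / (1 << man_bits)) * 2.0 ** (exp - bias)
```

Under these assumptions, the E4M3 pattern `0b0_0111_000` and the E5M2 pattern `0b0_01111_00` both decode to 1.0, and the E2M1 pattern `0b0_00_1` decodes to the subnormal 0.5, which matches the commonly cited FP4 value grid.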