🤖 AI Summary
Electronic processing-in-memory (PIM) accelerators compute energy-efficiently by co-locating storage and computation, but endurance and reconfiguration constraints confine them to static weight mapping; photonic accelerators deliver high speed and parallelism yet struggle with configurability and precision. Either limitation alone restricts deep neural network (DNN) inference. Method: This paper proposes a heterogeneity-aware, multi-objective DNN mapping framework tailored for hybrid electronic–photonic PIM architectures. It introduces a two-stage multi-objective exploration algorithm that dynamically decomposes DNN layers, orchestrates cross-modal resource allocation, and jointly optimizes accuracy, latency, and energy efficiency, breaking away from static weight-mapping paradigms. Contribution/Results: The paper presents the first electronic–photonic PIM system model supporting adaptive workload partitioning, along with a dedicated heterogeneous mapping optimization mechanism. Experiments on language and vision models demonstrate 2.74× higher energy efficiency and 3.47× lower latency compared to homogeneous architectures and naive mapping strategies.
📝 Abstract
The future of artificial intelligence (AI) acceleration demands a paradigm shift beyond the limitations of purely electronic or photonic architectures. Photonic analog computing delivers unmatched speed and parallelism but struggles with data movement, robustness, and precision. Electronic processing-in-memory (PIM) enables energy-efficient computing by co-locating storage and computation but suffers from endurance and reconfiguration constraints, limiting it to static weight mapping. Neither approach alone achieves the balance needed for adaptive, efficient AI. To break this impasse, we study a hybrid electronic–photonic PIM computing architecture and introduce H3PIMAP, a heterogeneity-aware mapping framework that seamlessly orchestrates workloads across electronic and photonic tiers. By optimizing workload partitioning through a two-stage multi-objective exploration method, H3PIMAP harnesses photonic speed for high-throughput operations and PIM efficiency for memory-bound tasks. System-level evaluations on language and vision models show that H3PIMAP achieves a 2.74× energy efficiency improvement and a 3.47× latency reduction compared to homogeneous architectures and naive mapping strategies. The proposed framework lays the foundation for hybrid AI accelerators, bridging the gap between electronic and photonic computation for next-generation efficiency and scalability.
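To make the two-stage idea concrete, the following is a minimal, hypothetical Python sketch: stage 1 enumerates candidate splits of one layer's work across a photonic tier and an electronic-PIM tier, and stage 2 keeps only the Pareto-optimal (latency, energy) candidates. The tier cost numbers, layer statistics, and the assumption that the two tiers execute their shares in parallel are illustrative assumptions for this sketch, not values or mechanisms from the paper (which also folds accuracy into the objectives).

```python
from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    macs: float     # compute volume in multiply-accumulate ops (illustrative)
    weights: float  # parameter footprint in bytes (unused in this sketch)


# Illustrative per-tier cost models: (latency per MAC in s, energy per MAC in J).
# Photonic: fast but less energy-efficient here; e-PIM: slower but efficient.
TIERS = {
    "photonic": (1e-12, 5e-13),
    "e_pim":    (5e-12, 1e-13),
}


def stage1_candidates(layer):
    """Stage 1: enumerate candidate splits of a layer across the two tiers."""
    cands = []
    for frac in (0.0, 0.25, 0.5, 0.75, 1.0):  # fraction mapped to photonic tier
        # Assumed parallel execution: latency is the slower tier's share.
        lat = max(frac * layer.macs * TIERS["photonic"][0],
                  (1 - frac) * layer.macs * TIERS["e_pim"][0])
        eng = (frac * layer.macs * TIERS["photonic"][1]
               + (1 - frac) * layer.macs * TIERS["e_pim"][1])
        cands.append((frac, lat, eng))
    return cands


def stage2_pareto(cands):
    """Stage 2: keep candidates not dominated in both latency and energy."""
    return [c for c in cands
            if not any(o != c and o[1] <= c[1] and o[2] <= c[2] for o in cands)]


layer = Layer("attn_proj", macs=1e9, weights=4e6)
front = stage2_pareto(stage1_candidates(layer))
for frac, lat, eng in front:
    print(f"photonic_frac={frac:.2f}  latency={lat:.2e}s  energy={eng:.2e}J")
```

With these made-up costs, every split is Pareto-optimal: pushing more work onto the photonic tier monotonically trades energy for latency, which is exactly the trade-off a downstream selector (or the accuracy objective) would resolve per layer.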