🤖 AI Summary
To address scalability bottlenecks in photonic integrated circuits (PICs) for general matrix multiplication (GEMM)—namely large footprint, high electro-optic interface cost, and complex control—this work proposes the Circulant Photonic Tensor Core (CirPTC), a block-circulant structured photonic accelerator tailored for structure-compressed optical neural networks (StrC-ONN). Its core innovation is a hardware-efficient block-circulant photonic tensor core, jointly optimized via structure-aware weight compression and hardware-aware training to preserve representational capacity while compensating for on-chip non-idealities. Experiments demonstrate a 74.91% reduction in trainable parameters, a computational density of 5.84 TOPS/mm², and a power efficiency of 47.94 TOPS/W—a 6.87× improvement over the baseline—without significant accuracy degradation on image classification. CirPTC thus substantially advances the efficiency and scalability limits of optical GEMM.
📝 Abstract
Recent advancements in artificial intelligence (AI) and deep neural networks (DNNs) have revolutionized numerous fields, enabling complex tasks by extracting intricate features from large datasets. However, the exponential growth in computational demands has outstripped the capabilities of traditional electrical hardware accelerators. Optical computing offers a promising alternative due to its inherent advantages of parallelism, high computational speed, and low power consumption. Yet, current photonic integrated circuits (PICs) designed for general matrix multiplication (GEMM) are constrained by large footprints, the high cost of electro-optical (E-O) interfaces, and high control complexity, limiting their scalability. To overcome these challenges, we introduce a block-circulant photonic tensor core (CirPTC) for a structure-compressed optical neural network (StrC-ONN) architecture. By applying a structured compression strategy to weight matrices, StrC-ONN significantly reduces model parameters and hardware requirements while preserving the universal representability of networks and maintaining comparable expressivity. Additionally, we propose a hardware-aware training framework that compensates for on-chip nonidealities to improve model robustness and accuracy. We experimentally demonstrate image processing and classification tasks, achieving up to a 74.91% reduction in trainable parameters while maintaining competitive accuracies. Performance analysis projects a computational density of 5.84 tera operations per second (TOPS) per mm^2 and a power efficiency of 47.94 TOPS/W, a 6.87× improvement achieved through the hardware-software co-design approach. By reducing both hardware requirements and control complexity across multiple dimensions, this work explores a new pathway to push the limits of optical computing in the pursuit of high efficiency and scalability.
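To make the compression idea concrete: a k×k circulant block is fully determined by a single length-k vector, so a weight matrix partitioned into circulant blocks needs only 1/k of the dense parameter count, and each block's matrix-vector product can be computed with FFTs. The sketch below is a hypothetical NumPy illustration of generic block-circulant weight compression, not the paper's implementation (the function names, block size k=4, and matrix size n=8 are illustrative assumptions; the paper's 74.91% figure is a model-level result, not this toy's 75%).

```python
import numpy as np

def circulant(c):
    """k x k circulant matrix with first column c, i.e. C[i, j] = c[(i - j) % k]."""
    k = len(c)
    return np.column_stack([np.roll(c, j) for j in range(k)])

def circ_matvec(c, v):
    """Multiply the circulant matrix defined by c with v via FFT (circular convolution)."""
    return np.fft.irfft(np.fft.rfft(c) * np.fft.rfft(v), n=len(c))

def block_circulant_matrix(params, n, k):
    """Assemble an n x n matrix from (n//k)^2 defining vectors of length k each."""
    b = n // k
    W = np.zeros((n, n))
    for i in range(b):
        for j in range(b):
            W[i*k:(i+1)*k, j*k:(j+1)*k] = circulant(params[i, j])
    return W

n, k = 8, 4                      # toy sizes (illustrative, not from the paper)
b = n // k
rng = np.random.default_rng(0)
params = rng.standard_normal((b, b, k))   # compressed parameters: b^2 * k values
W = block_circulant_matrix(params, n, k)  # equivalent dense n x n weight matrix
x = rng.standard_normal(n)

# Dense matvec vs. block-wise FFT matvec (the structure a photonic core can exploit)
y_dense = W @ x
y_fft = np.zeros(n)
for i in range(b):
    for j in range(b):
        y_fft[i*k:(i+1)*k] += circ_matvec(params[i, j], x[j*k:(j+1)*k])

reduction = 1 - params.size / W.size      # 1 - 1/k = 0.75 for k = 4
```

The point of the sketch is the accounting: the dense matrix stores n² values, the block-circulant form stores n²/k, and the structure is what lets hardware (optical or electronic) replace a full GEMM with cheaper convolution-style operations.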