🤖 AI Summary
Real-time 3D Gaussian Splatting (3DGS) rendering on resource-constrained devices faces two critical bottlenecks: excessive rasterization redundancy and high sorting overhead. To address them, this paper proposes a hardware-algorithm co-design framework. First, it introduces axis-oriented rasterization, which precomputes shared geometric terms, significantly reducing redundant computation. Second, it replaces conventional hardware sorters with a lightweight neural sorting network that predicts compositing weights in an order-agnostic manner. Third, it proposes a π-trajectory tile-scheduling strategy that combines Morton coding and Hilbert-curve ordering to maximize Gaussian reuse across tiles. The resulting reconfigurable processing array, implemented for edge devices, achieves a 23.4×–27.8× speedup and a 28.8×–51.4× energy reduction over GPU-based acceleration while preserving pixel-level rendering fidelity.
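The core idea behind order-agnostic compositing can be illustrated with a simple surrogate. The paper's actual neural network is not described here; the sketch below assumes a hand-crafted weighting (in the spirit of weighted blended order-independent transparency), where each fragment's weight depends only on its own opacity and depth, so the blended color is invariant to the order in which Gaussians arrive. The function name `order_agnostic_blend` and the exponential depth weighting are illustrative assumptions, not the paper's method.

```python
import math

def order_agnostic_blend(colors, alphas, depths):
    """Blend RGB fragments without sorting.

    Each fragment gets a weight from its own opacity and depth only
    (here: alpha * exp(-depth), an illustrative choice), so any
    permutation of the input fragments yields the same result.
    """
    weights = [a * math.exp(-d) for a, d in zip(alphas, depths)]
    total = sum(weights) or 1.0  # avoid division by zero
    return tuple(
        sum(w * c[k] for w, c in zip(weights, colors)) / total
        for k in range(3)
    )

# Same fragments in two different orders produce identical output.
front_to_back = order_agnostic_blend(
    [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], [0.5, 0.5], [1.0, 2.0])
back_to_front = order_agnostic_blend(
    [(0.0, 1.0, 0.0), (1.0, 0.0, 0.0)], [0.5, 0.5], [2.0, 1.0])
```

A learned network replaces the hand-crafted weight function in the paper's design; the permutation invariance is what removes the need for a hardware sorter.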
📝 Abstract
3D Gaussian Splatting (3DGS) has recently gained significant attention for high-quality and efficient view synthesis, making it widely adopted in fields such as AR/VR, robotics, and autonomous driving. Despite its impressive algorithmic performance, real-time rendering on resource-constrained devices remains a major challenge due to tight power and area budgets. This paper presents an architecture-algorithm co-design to address these inefficiencies. First, we reveal substantial redundancy caused by repeated computation of common terms during conventional rasterization. To resolve this, we propose axis-oriented rasterization, which pre-computes and reuses shared terms along both the X and Y axes through a dedicated hardware design, effectively reducing multiply-and-accumulate (MAC) operations by up to 63%. Second, by identifying the resource and performance inefficiency of the sorting process, we introduce a novel neural sorting approach that predicts order-independent blending weights using an efficient neural network, eliminating the need for costly hardware sorters. A dedicated training framework is also proposed to improve its algorithmic stability. Third, to uniformly support rasterization and neural-network inference, we design an efficient reconfigurable processing array that maximizes hardware utilization and throughput. Furthermore, we introduce a π-trajectory tile schedule, inspired by Morton encoding and the Hilbert curve, to optimize Gaussian reuse and reduce memory-access overhead. Comprehensive experiments demonstrate that the proposed design preserves rendering quality while achieving a speedup of 23.4×–27.8× and energy savings of 28.8×–51.4× compared to edge GPUs on real-world scenes. We plan to open-source our design to foster further development in this field.
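The Morton-encoding ingredient of the tile schedule is standard and can be sketched directly. The snippet below shows the classic bit-interleaving construction: tiles visited in ascending Morton code trace a Z-order curve, keeping spatially adjacent tiles close together in the traversal, which is the locality property the π-trajectory schedule exploits for Gaussian reuse. This is a generic illustration of Morton ordering, not the paper's full schedule (which additionally draws on Hilbert-curve ordering).

```python
def part1by1(n: int) -> int:
    """Spread the low 16 bits of n so each bit lands in an even position."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton2d(x: int, y: int) -> int:
    """Interleave the bits of (x, y) into a single Morton (Z-order) code."""
    return part1by1(x) | (part1by1(y) << 1)

# Visit a 4x4 tile grid in Morton order: neighboring tiles in the
# traversal are also neighbors on screen, improving Gaussian reuse.
tiles = [(x, y) for y in range(4) for x in range(4)]
z_order = sorted(tiles, key=lambda t: morton2d(*t))
```

The first four tiles visited form the top-left 2x2 block, then the traversal moves to the next 2x2 block, and so on recursively.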