🤖 AI Summary
This work addresses the formal verification of vision-based autonomous systems by rigorously propagating uncertainty through scene rendering under continuous camera pose variations. This is especially challenging for Gaussian splatting because operations such as matrix inversion and depth sorting are not directly amenable to standard linear approximations, forcing a trade-off between precision and efficiency. We propose the first linear relational abstraction method tailored to Gaussian splatting, combining piecewise-linear bound propagation with tile- and batch-level parallelism and memory-aware scheduling to build a scalable, block-wise abstract rendering framework. The approach enables sound uncertainty quantification over scenes with up to 750k Gaussians under continuous camera translation, rotation, and scene perturbations. Compared to the state-of-the-art mesh-based abstraction method, it achieves 2-14x speedups with no loss of precision, making it the first solution capable of scalable, reliable abstract rendering for large Gaussian splat scenes.
📝 Abstract
We introduce abstract rendering, a method for computing a set of images by rendering a scene from a continuously varying range of camera positions. The resulting abstract image, which encodes an infinite collection of possible renderings, is represented using constraints on the image matrix, enabling rigorous uncertainty propagation through the rendering process. This capability is particularly valuable for the formal verification of vision-based autonomous systems and other safety-critical applications. Our approach operates on Gaussian splat scenes, an emerging representation in computer vision and robotics. We leverage efficient piecewise linear bound propagation to abstract fundamental rendering operations, while addressing key challenges that arise in matrix inversion and depth sorting, two operations not directly amenable to standard approximations. To handle these, we develop novel linear relational abstractions that maintain precision while ensuring computational efficiency. These abstractions not only power our abstract rendering algorithm but also provide broadly applicable tools for other rendering problems. Our implementation, AbstractSplat, is optimized for scalability, handling up to 750k Gaussians while allowing users to balance memory and runtime through tile- and batch-based computation. Compared to the only existing abstract image method for mesh-based scenes, AbstractSplat achieves 2-14x speedups while preserving precision. Our results demonstrate that continuous camera motion, rotations, and scene variations can be rigorously analyzed at scale, making abstract rendering a powerful tool for uncertainty-aware vision applications.
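To give a flavor of the bound-propagation primitive underlying this kind of analysis, the sketch below propagates an input box soundly through an affine rendering step. This is a minimal illustration of standard interval bound propagation, not the paper's linear relational abstraction or the AbstractSplat implementation; all names here are hypothetical.

```python
# Minimal sketch: sound interval bounds through an affine map y = W @ x + b,
# for any x in the box lo <= x <= hi. This is the standard interval-propagation
# primitive; the paper's piecewise-linear relational abstractions refine it.
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Element-wise lower/upper bounds on W @ x + b over the box [lo, hi]."""
    center = (lo + hi) / 2.0          # box midpoint
    radius = (hi - lo) / 2.0          # box half-widths
    y_center = W @ center + b         # image of the midpoint
    y_radius = np.abs(W) @ radius     # worst-case deviation per output
    return y_center - y_radius, y_center + y_radius
```

For an affine map these bounds are tight: each bound is attained at a corner of the input box where the sign of x matches the sign of the corresponding row of W. Nonlinear rendering steps (e.g. the Gaussian exponential, alpha compositing) require piecewise-linear relaxations on top of this primitive.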