GS-ProCams: Gaussian Splatting-based Projector-Camera Systems

📅 2024-12-16
🏛️ IEEE Transactions on Visualization and Computer Graphics
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional CNN-based ProCam systems suffer from limited viewpoint support, while NeRF-based approaches, though viewpoint-agnostic, require co-located light sources and incur prohibitive computational and memory overhead. This paper introduces the first ProCam framework built upon 2D Gaussian Splatting, pioneering the use of 2D Gaussians as a unified scene representation for projector-camera system modeling—jointly encoding geometry, photometry, and global illumination. Our method eliminates the need for co-located lighting, enables arbitrary-viewpoint projection mapping, and achieves efficient optimization via differentiable physical rendering coupled with projector-response modeling. It delivers microsecond-scale inference and minimal GPU memory consumption: training memory is reduced to 1/10 that of NeRF, inference is accelerated by 900×, and projection simulation quality improves—all without specialized hardware.

Technology Category

Application Category

📝 Abstract
We present GS-ProCams, the first Gaussian Splatting-based framework for projector-camera systems (ProCams). GSProCams is not only view-agnostic but also significantly enhances the efficiency of projection mapping (PM) that requires establishing geometric and radiometric mappings between the projector and the camera. Previous CNN-based ProCams are constrained to a specific viewpoint, limiting their applicability to novel perspectives. In contrast, NeRF-based ProCams support view-agnostic projection mapping, however, they require an additional co-located light source and demand significant computational and memory resources. To address this issue, we propose GS-ProCams that employs 2D Gaussian for scene representations, and enables efficient view-agnostic ProCams applications. In particular, we explicitly model the complex geometric and photometric mappings of ProCams using projector responses, the projection surface's geometry and materials represented by Gaussians, and the global illumination component. Then, we employ differentiable physically-based rendering to jointly estimate them from captured multi-view projections. Compared to state-of-the-art NeRF-based methods, our GS-ProCams eliminates the need for additional devices, achieving superior ProCams simulation quality. It also uses only 1/10 of the GPU memory for training and is 900 times faster in inference speed. Please refer to our project page for the code and dataset: https://realqingyue.github.io/GS-ProCams/.
Problem

Research questions and friction points this paper is trying to address.

Enables view-agnostic projection mapping without viewpoint constraints
Enhances efficiency of geometric and radiometric mapping in ProCams
Reduces computational and memory demands compared to NeRF-based methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses 2D Gaussian for scene representations
Models ProCams mappings with differentiable rendering
Achieves high efficiency without additional devices
🔎 Similar Papers
No similar papers found.