🤖 AI Summary
To address the high GPU rendering cost and interaction latency of real-time visualization of large-scale, high-resolution volumetric data, this paper proposes an importance mask learning and synthesis network. The method jointly models data characteristics, viewpoint parameters, and the responses of the downstream reconstruction network to dynamically generate sparse pixel-wise importance masks, selectively rendering only the regions critical to final reconstruction fidelity. Novel differentiable compaction/decompaction layers enable lightweight end-to-end adaptation of existing pre-trained reconstruction networks without lengthy retraining. By integrating differentiable rendering, importance sampling, and image inpainting techniques, the approach significantly reduces the per-frame rendering load. Experiments demonstrate that the method maintains reconstruction accuracy while substantially reducing measured rendering latency, enabling interactive, real-time visualization of scientific volumetric datasets.
📝 Abstract
Visualizing a large-scale volumetric dataset at high resolution is challenging due to the high computational time and space complexity. Recent deep-learning-based image inpainting methods significantly reduce rendering latency by reconstructing, in constant time on the GPU, a high-resolution image for visualization from a partially rendered image in which only a small portion of pixels pass through the expensive rendering pipeline. However, existing methods must render every pixel of a predefined regular sampling pattern. In this work, we present the Importance Mask Learning (IML) and Importance Mask Synthesis (IMS) networks, the first attempt to learn importance regions from the sampling pattern to further minimize the number of rendered pixels by jointly considering the dataset, the user's view parameters, and the downstream reconstruction neural network. Our solution is a unified framework that handles various image-inpainting-based visualization methods through the proposed differentiable compaction/decompaction layers. Experiments show that our method further improves the overall rendering latency of state-of-the-art reconstruction-network-based volume visualization methods at no extra cost when rendering scientific volumetric datasets. Our method can also directly optimize off-the-shelf pre-trained reconstruction neural networks without lengthy retraining.
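The compaction/decompaction idea, i.e. rendering only the mask-selected pixels and scattering the results back into image space as sparse input for the reconstruction network, can be sketched as follows. This is an illustrative NumPy mock-up, not the authors' implementation: the importance scores, pixel budget, and `render_pixel` stub are all assumptions, and the real layers are differentiable so gradients can flow through mask selection during training.

```python
import numpy as np

def compact(scores, budget):
    """Pick the `budget` highest-importance pixel indices (flattened)."""
    flat = scores.ravel()
    return np.argpartition(flat, -budget)[-budget:]

def render_selected(idx, render_pixel):
    """Run the expensive renderer only on the selected pixels."""
    return np.array([render_pixel(i) for i in idx])

def decompact(idx, vals, shape):
    """Scatter rendered values back to a sparse image; unrendered pixels stay 0."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = vals
    return out.reshape(shape)

# Toy example: scores stand in for the IML network's output.
H, W = 4, 4
scores = np.arange(H * W, dtype=float).reshape(H, W)
idx = compact(scores, budget=4)                      # keep 4 most important pixels
vals = render_selected(idx, render_pixel=lambda i: float(i) * 2.0)
sparse = decompact(idx, vals, (H, W))                # input to the inpainting network
```

In this toy setup only a quarter of the pixels are rendered; everything else is left for the downstream inpainting/reconstruction network to fill in.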