🤖 AI Summary
Light field displays suffer from limited angular sampling, which yields insufficient angular/spatial resolution and restricts high-quality 3D rendering to a narrow depth of field (DoF); visual quality degrades significantly outside this range due to severe aliasing artifacts, while conventional DoF rendering often sacrifices fine details. To address this, we propose the first Depth-of-Field-Aware Scene Complexity (DASC) metric, which jointly encodes geometric structure and spatial position to quantitatively characterize scene complexity under DoF constraints. We further establish a predictive model linking DASC to human-preferred blur levels, enabling content-adaptive optimal blur rendering. Our approach integrates light field property modeling, multi-scene geometric analysis, psychophysical experiments, and regression-based modeling. Evaluation demonstrates that the method significantly improves visual preference while effectively balancing detail preservation and aliasing suppression.
📝 Abstract
Light field display is one of the technologies providing 3D immersive visualization. However, a light field display generates only a limited number of light rays, which results in finite angular and spatial resolutions. Therefore, 3D content can be shown with high quality only within a narrow depth range, referred to as the Depth of Field (DoF), around the display screen. Outside this range, aliasing artifacts appear and quality degrades proportionally to the distance from the screen. One solution to mitigate the artifacts is depth-of-field rendering, which blurs the content in the distorted regions but can remove scene details. This research proposes a DoF-Aware Scene Complexity (DASC) metric that characterizes 3D content based on geometrical and positional factors, considering the light field display's DoF. We also evaluate observers' preferences across different levels of blurriness caused by DoF rendering, ranging from sharp, aliased scenes to overly smoothed, alias-free scenes. We conducted this study over multiple scenes that we created to account for different types of content. Based on the outcome of the subjective studies, we propose a model that takes the value of the DASC metric as input and predicts the preferred level of blurring for the given scene as output.
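To make the depth-of-field rendering idea concrete, the sketch below blurs each pixel by an amount that grows with its depth distance from the display screen plane, leaving pixels inside the DoF untouched. This is a minimal illustrative toy, not the paper's renderer: the function name `dof_blur`, the box filter, and the linear mapping from depth distance to blur radius are all assumptions made for demonstration.

```python
import numpy as np

def dof_blur(image, depth, screen_depth, max_radius=4, dof_half_width=0.1):
    """Toy depth-of-field rendering: box-blur each pixel with a radius
    that grows with |depth - screen_depth|.

    Pixels within `dof_half_width` of the screen plane are left sharp;
    beyond that, the blur radius grows linearly up to `max_radius`.
    All parameter names and the linear mapping are illustrative
    assumptions, not taken from the paper.
    """
    h, w = image.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(x0 := 0, w):  # iterate over every pixel
            dist = abs(depth[y, x] - screen_depth)
            # No blur inside the DoF; radius grows linearly outside it.
            r = int(min(max_radius,
                        max(0.0, (dist - dof_half_width) / dof_half_width)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out
```

A scene lying on the screen plane passes through unchanged, while a high-contrast edge placed far from the screen is smoothed, which mirrors the trade-off the abstract describes: suppressing aliasing at the cost of scene detail.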