🤖 AI Summary
Conventional multi-view display characterization requires specialized optical equipment and controlled darkroom environments, posing significant practical barriers. To address this, we propose a hardware- and darkroom-free calibration paradigm. Methodologically, we introduce the first synergistic integration of lensless imaging and implicit neural representations (INRs): a learnable optical encoding/decoding network jointly optimizes the physical light path and the neural reconstruction algorithm, enabling efficient light-field reconstruction within a 46.6° × 37.6° viewing cone. Experiments demonstrate that our approach substantially lowers the barrier to calibration, achieving high-fidelity light-field characterization under ambient illumination. Our primary contribution is a novel framework that unifies lensless imaging with INRs for display characterization, providing a technically viable pathway toward user-deployable, low-cost, yet high-accuracy display calibration.
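The co-design described above maps naturally onto a differentiable forward model. Below is a minimal PyTorch sketch, not the authors' implementation: `LearnableMask`, `INRDecoder`, and `training_step` are hypothetical names, and the amplitude-mask parameterization, MLP architecture, coordinate convention, and loss are illustrative assumptions about how a physical layer and an INR could be optimized jointly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableMask(nn.Module):
    """Differentiable stand-in for the lensless optical encoder: a learnable
    amplitude mask whose point spread function is convolved with the incident
    image. The parameterization is an assumption, not the paper's design."""

    def __init__(self, size: int = 9):
        super().__init__()
        self.logits = nn.Parameter(torch.randn(1, 1, size, size))

    def forward(self, scene: torch.Tensor) -> torch.Tensor:
        psf = torch.sigmoid(self.logits)  # transmittance kept in [0, 1]
        psf = psf / psf.sum()             # conserve total energy
        return F.conv2d(scene, psf, padding="same")


class INRDecoder(nn.Module):
    """Implicit neural representation of the display's light field: maps an
    (x, y, theta, phi) query -- pixel position plus viewing direction inside
    the cone -- to RGB radiance."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)


H = W = 64
encoder, decoder = LearnableMask(), INRDecoder()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4
)


def training_step(measurement: torch.Tensor, coords: torch.Tensor) -> float:
    """One joint step: render the INR, simulate the coded capture through the
    learnable optics, and match the real lensless measurement. Gradients flow
    into both the mask (physical light path) and the INR weights."""
    radiance = decoder(coords)                        # (H*W, 3)
    rendered = radiance.t().reshape(1, 3, H, W)       # one light-field slice
    coded = encoder(rendered.mean(1, keepdim=True))   # simulated sensor image
    loss = F.mse_loss(coded, measurement)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The key property this sketch tries to capture is that the mask logits sit inside the same computation graph as the reconstruction loss, so the optics and the decoder are optimized against each other rather than designed separately.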
📝 Abstract
Calibrating displays is a basic and recurring task that content creators must perform to maintain an optimal visual experience, yet it remains troublesome. Measuring display characteristics from different viewpoints typically requires specialized equipment and a dark room, making it inaccessible to most users. To remove the specialized hardware requirements of display calibration, our work co-designs a lensless camera and an Implicit Neural Representation (INR)-based algorithm for capturing display characteristics from various viewpoints. More specifically, our pipeline enables efficient reconstruction of the light field emitted by a display within a 46.6° × 37.6° viewing cone. Our emerging pipeline takes the first steps toward effortless display calibration and characterization.
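To make the viewing-cone figure concrete, here is a sketch of how a trained INR (the hypothetical `decoder` from the sketch above) could be swept across the 46.6° × 37.6° cone, i.e. half-angles of ±23.3° horizontally and ±18.8° vertically. The grid resolutions and normalized-coordinate convention are assumptions for illustration only.

```python
import torch

# Viewing directions spanning the reported cone (assumed half-angles).
thetas = torch.deg2rad(torch.linspace(-23.3, 23.3, steps=9))  # horizontal
phis = torch.deg2rad(torch.linspace(-18.8, 18.8, steps=7))    # vertical

# Normalized pixel grid over the display surface.
xs = torch.linspace(-1.0, 1.0, steps=64)
ys = torch.linspace(-1.0, 1.0, steps=64)
grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")

views = []
with torch.no_grad():
    for theta in thetas:
        for phi in phis:
            # One (x, y, theta, phi) query per display pixel at this view.
            coords = torch.stack(
                [
                    grid_x.flatten(),
                    grid_y.flatten(),
                    theta.expand(64 * 64),
                    phi.expand(64 * 64),
                ],
                dim=-1,
            )
            views.append(decoder(coords).reshape(64, 64, 3))

# `views` now holds 9 x 7 per-direction radiance maps that a downstream
# characterization step could consume (e.g., per-view gamma or color fits).
```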