AI Summary
This paper addresses camera self-calibration without scene priors, i.e., estimating focal length and principal point solely from an image sequence. We propose TrifocalCalib, a novel method that formulates geometric constraints via the trifocal tensor, requires no calibration objects or assumptions about camera motion, and operates on only a small set of uncalibrated images to jointly estimate the projective intrinsic parameters. Our key contribution lies in explicitly modeling the trifocal tensor as a differentiable calibration module, optimized jointly on synthetic and real data; this design ensures computational efficiency while substantially improving accuracy and robustness. Extensive experiments demonstrate that TrifocalCalib outperforms both classical and learning-based state-of-the-art methods on synthetic and real-world benchmarks. To ensure reproducibility, the source code is publicly released.
Abstract
Estimating camera intrinsic parameters without prior scene knowledge is a fundamental challenge in computer vision. This capability is particularly important for applications such as autonomous driving and vehicle platooning, where pre-calibrated setups are impractical and real-time adaptability is necessary. To advance the state of the art, we present a set of equations based on the calibrated trifocal tensor, enabling projective camera self-calibration from minimal image data. Our method, termed TrifocalCalib, significantly improves accuracy and robustness compared to both recent learning-based and classical approaches. Unlike many existing techniques, our approach requires no calibration target, imposes no constraints on camera motion, and simultaneously estimates both focal length and principal point. Evaluations in both procedurally generated synthetic environments and structured dataset-based scenarios demonstrate the effectiveness of our approach. To support reproducibility, we make the code publicly available.
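For context on what is being estimated: the focal length and principal point mentioned above are the entries of the standard pinhole intrinsic matrix. A minimal sketch, assuming square pixels and zero skew (the function name and example values are illustrative, not part of the paper's method):

```python
import numpy as np

def intrinsic_matrix(f, cx, cy):
    """Pinhole intrinsic matrix K for a camera with square pixels and zero skew.

    f        -- focal length in pixels
    (cx, cy) -- principal point in pixels

    Self-calibration methods such as the one described here recover these
    parameters from image correspondences alone, without a calibration target.
    """
    return np.array([
        [f,   0.0, cx],
        [0.0, f,   cy],
        [0.0, 0.0, 1.0],
    ])

# Example: a camera with 1000 px focal length and principal point near
# the center of a 640x480 image.
K = intrinsic_matrix(f=1000.0, cx=320.0, cy=240.0)
```

A point `X` in camera coordinates projects to pixel coordinates via `K @ X` followed by division by the last component; recovering `K` is exactly the goal of self-calibration.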