🤖 AI Summary
This work addresses the challenge of inaccurate relative camera pose estimation in in-cabin fisheye imaging, which suffers from severe distortion and spatial constraints. The authors propose a single-pass Transformer-based architecture that leverages a frozen DINOv3 backbone for feature extraction and a Transformer decoder (ViT-Small scale) to model geometric correspondences between reference and target images, directly regressing metric-scale translation and rotation. Notably, the model is trained exclusively on synthetic data yet generalizes to real in-cabin scenes without requiring known camera intrinsics, yielding physically plausible poses. Evaluated on both the newly introduced In-Cabin-Pose benchmark and the 7-Scenes dataset, the method demonstrates high accuracy and real-time performance, with code and dataset publicly released to support safety-critical driver monitoring applications.
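The quantity the network regresses — a metric-scale relative pose between a reference and a target view — can be written as a single rigid transform. The sketch below is a minimal illustration of that relationship using standard camera-to-world conventions; it is not taken from the paper's code, and the frame conventions are an assumption:

```python
import numpy as np

def relative_pose(T_ref, T_tgt):
    """Relative rigid transform mapping points from the reference camera
    frame into the target camera frame, given camera-to-world 4x4 poses:
    T_rel = inv(T_tgt) @ T_ref."""
    return np.linalg.inv(T_tgt) @ T_ref

# Toy example: target camera is the reference camera shifted 0.2 m along x
T_ref = np.eye(4)
T_tgt = np.eye(4)
T_tgt[0, 3] = 0.2

T_rel = relative_pose(T_ref, T_tgt)
R_rel, t_rel = T_rel[:3, :3], T_rel[:3, 3]
# With no rotation between views, R_rel is identity and t_rel = [-0.2, 0, 0]
```

In this representation, the metric scale of `t_rel` (here, metres) is exactly what a regression-based method must recover directly, since two-view geometry alone only determines translation up to scale.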
📄 Abstract
Camera extrinsic calibration is a fundamental task in computer vision. However, precise relative pose estimation in constrained, highly distorted environments, such as in-cabin automotive monitoring (ICAM), remains challenging. We present InCaRPose, a Transformer-based architecture designed for robust relative pose prediction between image pairs, which can be used for camera extrinsic calibration. By leveraging frozen features from backbones such as DINOv3 together with a Transformer-based decoder, our model effectively captures the geometric relationship between a reference and a target view. Unlike traditional methods, our approach recovers absolute metric-scale translation within the physically plausible adjustment range of in-cabin camera mounts in a single inference step, which is critical for ICAM, where accurate real-world distances are required for safety-relevant perception. We specifically address the challenges of highly distorted fisheye cameras in automotive interiors by training exclusively on synthetic data. Our model generalizes to real-world cabin environments without requiring identical camera intrinsics and additionally achieves competitive performance on the public 7-Scenes dataset. Despite limited training data, InCaRPose maintains high precision in both rotation and translation, even with a ViT-Small backbone. This enables real-time performance for time-critical inference, such as driver monitoring in supervised autonomous driving. We release our real-world In-Cabin-Pose test dataset, consisting of highly distorted vehicle-interior images, and our code at https://github.com/felixstillger/InCaRPose.
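Precision "in both rotation and translation" on benchmarks such as 7-Scenes is conventionally reported as an angular (geodesic) rotation error and a Euclidean translation error. The paper's exact evaluation protocol is not quoted here; the following is a sketch of those standard metrics:

```python
import numpy as np

def rotation_error_deg(R_pred, R_gt):
    """Geodesic angle between two 3x3 rotation matrices, in degrees."""
    cos_angle = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    # Clip guards against arccos domain errors from floating-point noise
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

def translation_error_m(t_pred, t_gt):
    """Euclidean distance between predicted and ground-truth translations.
    Meaningful in metres only because the prediction is metric-scale."""
    return float(np.linalg.norm(np.asarray(t_pred) - np.asarray(t_gt)))

# Toy check: a 10-degree rotation about z and a 5 cm translation offset
theta = np.radians(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
err_r = rotation_error_deg(Rz, np.eye(3))          # ~10.0 degrees
err_t = translation_error_m([0.05, 0.0, 0.0],
                            [0.0, 0.0, 0.0])       # 0.05 m
```

Because the method outputs absolute metric translation rather than an up-to-scale direction, the translation error can be stated directly in metres, which is what makes the result usable for safety-relevant in-cabin distance reasoning.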