Learning to Sense for Driving: Joint Optics-Sensor-Model Co-Design for Semantic Segmentation

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional autonomous driving perception pipelines decouple optical design from downstream tasks, relying on fixed lenses and handcrafted ISP pipelines—causing irreversible RAW-domain information loss and forcing models to adapt to sensor-specific artifacts. This work proposes an end-to-end RAW-to-task semantic segmentation framework, the first to jointly optimize the full imaging stack: optical lens design, a learnable color filter array (CFA), physics-based sensor noise and quantization modeling, and a lightweight segmentation network (~1M parameters). Built upon the DeepLens framework, our method incorporates Poisson–Gaussian noise modeling, differentiable CFA learning, realistic mobile-scale lens simulation, and 8-bit quantization-aware training. Evaluated on KITTI-360, it achieves significant mIoU gains—particularly for slender objects and low-light categories—while maintaining robustness under real-world sensor imperfections. The optimized model runs at 28 FPS on edge hardware, enabling real-time deployment.
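The Poisson–Gaussian noise model mentioned in the summary is commonly simulated as signal-dependent shot noise (Poisson in photo-electron counts) plus signal-independent Gaussian read noise. A minimal NumPy sketch of that simulation follows; the parameter names and values (`gain`, `read_std`) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def poisson_gaussian_noise(raw, gain=0.01, read_std=0.002, rng=None):
    """Simulate sensor noise on a normalized RAW image in [0, 1].

    Shot noise: Poisson in electron counts (raw / gain electrons),
    scaled back to signal units. Read noise: additive Gaussian with
    standard deviation `read_std`.
    """
    rng = np.random.default_rng() if rng is None else rng
    electrons = raw / gain                       # expected electron counts
    shot = rng.poisson(electrons) * gain         # Poisson shot noise
    read = rng.normal(0.0, read_std, raw.shape)  # Gaussian read noise
    return np.clip(shot + read, 0.0, 1.0)

raw = np.full((4, 4), 0.5)
noisy = poisson_gaussian_noise(raw, rng=np.random.default_rng(0))
```

Because the noise variance grows with the signal, training the segmentation network through this layer exposes it to realistic low-light degradation rather than uniform Gaussian corruption.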

📝 Abstract
Traditional autonomous driving pipelines decouple camera design from downstream perception, relying on fixed optics and handcrafted ISPs that prioritize human-viewable imagery rather than machine semantics. This separation discards information during demosaicing, denoising, or quantization, while forcing models to adapt to sensor artifacts. We present a task-driven co-design framework that unifies optics, sensor modeling, and lightweight semantic segmentation networks into a single end-to-end RAW-to-task pipeline. Building on DeepLens [19], our system integrates realistic cellphone-scale lens models, learnable color filter arrays, Poisson–Gaussian noise processes, and quantization, all optimized directly for segmentation objectives. Evaluations on KITTI-360 show consistent mIoU improvements over fixed pipelines, with optics modeling and CFA learning providing the largest gains, especially for thin or low-light-sensitive classes. Importantly, these robustness gains are achieved with a compact ~1M-parameter model running at ~28 FPS, demonstrating edge deployability. Visual and quantitative analyses further highlight how co-designed sensors adapt acquisition to semantic structure, sharpening boundaries and maintaining accuracy under blur, noise, and low bit-depth. Together, these findings establish full-stack co-optimization of optics, sensors, and networks as a principled path toward efficient, reliable, and deployable perception in autonomous systems.
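A common way to make a color filter array differentiable, as the abstract's "learnable color filter arrays" suggests (the paper's exact parameterization is not given here, so this is an assumed relaxation), is to hold per-pixel logits over the color channels and soften the hard filter selection into a temperature-controlled softmax:

```python
import numpy as np

def cfa_sample(rgb, logits, temperature=1.0):
    """Soft CFA sampling: each pixel mixes R/G/B by softmax(logits / T).

    rgb:    (H, W, 3) scene image
    logits: (H, W, 3) learnable per-pixel filter logits
    As temperature -> 0 the softmax approaches a hard one-channel filter.
    """
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)        # numerical stability
    w = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return (rgb * w).sum(axis=-1)                # (H, W) mosaiced RAW

# Illustrative: one pixel's logits pushed toward the red filter
rgb = np.ones((2, 2, 3)) * np.array([0.2, 0.5, 0.8])
logits = np.zeros((2, 2, 3))
logits[0, 0, 0] = 5.0                            # favor R at (0, 0)
raw = cfa_sample(rgb, logits, temperature=0.5)
```

Annealing the temperature during training lets the learned pattern converge to a hard, physically realizable filter mosaic while keeping gradients usable early on.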
Problem

Research questions and friction points this paper is trying to address.

Jointly optimizes optics, sensor, and segmentation for autonomous driving
Replaces fixed camera pipelines with end-to-end RAW-to-task learning
Improves semantic segmentation robustness under blur, noise, and low-light
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint optics-sensor-model co-design framework
End-to-end RAW-to-task pipeline optimization
Learnable color filter arrays for semantics
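The 8-bit quantization-aware training named in the summary is typically implemented as a fake-quantize step in the forward pass, with gradients passed straight through the rounding in the backward pass. A minimal sketch of the forward step, under the assumption of a uniform quantizer on a normalized [0, 1] signal (the paper's exact scheme is not specified here):

```python
import numpy as np

def fake_quantize(x, bits=8):
    """Uniform fake-quantization of x in [0, 1] to 2**bits levels.

    Forward: clip, scale to integer steps, round, and rescale.
    In QAT the backward pass treats the round as identity
    (straight-through estimator), so gradients still flow.
    """
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

x = np.array([0.0, 0.1, 0.5, 1.0])
q = fake_quantize(x, bits=8)
```

Training through this layer lets the segmentation network see the exact bit-depth it will receive at deployment, instead of full-precision inputs it will never observe on the edge device.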