From Snow to Rain: Evaluating Robustness, Calibration, and Complexity of Model-Based Robust Training

📅 2026-01-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited robustness and high computational cost of deep learning models under natural perturbations such as rain and snow. To this end, we propose a robust training framework based on a learned nuisance variable model, which integrates model-driven data augmentation with adversarial optimization through a hybrid training strategy. This approach achieves strong robustness while significantly reducing computational overhead. Experiments on the CURE-TSR traffic sign recognition dataset demonstrate that our method consistently outperforms baseline approaches—including Vanilla training, adversarial training, and AugMix—across various perturbation intensities. Notably, adversarial model training yields the highest robustness, whereas model-driven data augmentation attains comparable performance at substantially lower complexity and further enhances model calibration.

📝 Abstract
Robustness to natural corruptions remains a critical challenge for reliable deep learning, particularly in safety-sensitive domains. We study a family of model-based training approaches that leverage a learned nuisance variation model to generate realistic corruptions, as well as new hybrid strategies that combine random coverage with adversarial refinement in nuisance space. Using the Challenging Unreal and Real Environments for Traffic Sign Recognition (CURE-TSR) dataset, with Snow and Rain corruptions, we evaluate accuracy, calibration, and training complexity across corruption severities. Our results show that model-based methods consistently outperform the Vanilla, Adversarial Training, and AugMix baselines: model-based adversarial training provides the strongest robustness across all corruptions, but at the expense of higher computation, while model-based data augmentation achieves comparable robustness with a factor-of-$T$ reduction in computational complexity and no statistically significant drop in performance. These findings highlight the importance of learned nuisance models for capturing natural variability, and suggest a promising path toward more resilient and calibrated models under challenging conditions.
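The hybrid strategy described in the abstract (broad random coverage of nuisance space, refined by an adversarial search over the nuisance variable rather than over raw pixels) could be sketched roughly as follows. This is a toy illustration, not the paper's implementation: the nuisance model, the linear scorer, the finite-difference gradient, and all function names (`nuisance_model`, `adversarial_z`, `hybrid_batch_z`) are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def nuisance_model(x, z):
    # Hypothetical stand-in for a learned corruption model: shifts the
    # input along a fixed unit "rain/snow" direction, scaled by the
    # scalar nuisance variable z.
    direction = np.ones_like(x) / np.sqrt(x.size)
    return x + z * direction

def loss(x, y, w):
    # Toy squared-error loss for a linear scorer w (placeholder for the
    # task network's training loss).
    return float((x @ w - y) ** 2)

def adversarial_z(x, y, w, steps=5, lr=0.5, z0=0.0):
    # Adversarial refinement in nuisance space: ascend the loss with
    # respect to z (finite-difference gradient), so the corruption is
    # searched over the low-dimensional nuisance variable, not pixels.
    z, eps = z0, 1e-4
    for _ in range(steps):
        g = (loss(nuisance_model(x, z + eps), y, w)
             - loss(nuisance_model(x, z - eps), y, w)) / (2 * eps)
        z += lr * g
    return z

def hybrid_batch_z(x, y, w, p_adv=0.5):
    # Hybrid strategy: with probability p_adv refine z adversarially,
    # otherwise sample z at random for broad coverage of nuisance space.
    if rng.random() < p_adv:
        return adversarial_z(x, y, w)
    return float(rng.uniform(-1.0, 1.0))
```

Because `adversarial_z` only optimizes a scalar, its cost per example scales with the number of ascent `steps`; the purely random branch skips that inner loop entirely, which is the intuition behind the augmentation variant's lower training complexity.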
Problem

Research questions and friction points this paper is trying to address.

robustness
natural corruptions
calibration
model-based training
nuisance variation
Innovation

Methods, ideas, or system contributions that make the work stand out.

model-based robust training
nuisance variation model
natural corruptions
adversarial refinement
calibration
Josué Martínez-Martínez
MIT Lincoln Laboratory
Olivia Brown
MIT Lincoln Laboratory
Giselle Zeno
MIT Lincoln Laboratory
Pooya Khorrami
MIT Lincoln Laboratory
Rajmonda Caceres
MIT Lincoln Laboratory