RESFL: An Uncertainty-Aware Framework for Responsible Federated Learning by Balancing Privacy, Fairness and Utility in Autonomous Vehicles

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning for autonomous driving, privacy-preserving mechanisms, particularly differential privacy, exacerbate model unfairness across demographic groups, creating a critical tension between privacy and fairness. Method: a privacy-fairness co-optimization framework featuring (i) an adversarial privacy disentanglement mechanism, built on a gradient reversal layer, that suppresses sensitive-attribute leakage and the resulting bias without requiring access to sensitive attributes at aggregation time; and (ii) an uncertainty-guided fairness-aware aggregation strategy that uses an evidential neural network to weight client updates adaptively, enhancing robustness and generalization. Contributions/Results: evaluated on the FACET dataset and the CARLA simulator, the method improves detection accuracy, reduces inter-group fairness disparity (ΔSPD) by 37%, lowers privacy attack success rate by 52%, and surpasses state-of-the-art methods in robustness. To the best of our knowledge, this is the first work to systematically address the tripartite trade-off among privacy, fairness, and utility in federated object detection.
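The gradient reversal layer at the heart of the adversarial disentanglement component is a standard construction: it acts as the identity in the forward pass and flips (and scales) gradients in the backward pass, so the feature extractor is trained to fool the sensitive-attribute adversary. A minimal sketch in NumPy, assuming a manual forward/backward interface; the class name and the strength parameter `lam` are illustrative, not taken from the paper:

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer (GRL).

    Forward: identity on the features.
    Backward: multiplies incoming gradients by -lam, so upstream layers
    receive a *reversed* training signal from the adversary's loss.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength (often annealed during training)

    def forward(self, x):
        # Features pass through unchanged to the sensitive-attribute adversary.
        return x

    def backward(self, grad_output):
        # The feature extractor sees the negated, scaled gradient, pushing it
        # to remove sensitive-attribute information from its representation.
        return -self.lam * grad_output
```

In framework code this is typically implemented as a custom autograd function; the sketch above only exposes the two passes explicitly to show the asymmetry.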

📝 Abstract
Autonomous vehicles (AVs) increasingly rely on Federated Learning (FL) to enhance perception models while preserving privacy. However, existing FL frameworks struggle to balance privacy, fairness, and robustness, leading to performance disparities across demographic groups. Privacy-preserving techniques like differential privacy mitigate data leakage risks but worsen fairness by restricting access to sensitive attributes needed for bias correction. This work explores the trade-off between privacy and fairness in FL-based object detection for AVs and introduces RESFL, an integrated solution optimizing both. RESFL incorporates adversarial privacy disentanglement and uncertainty-guided fairness-aware aggregation. The adversarial component uses a gradient reversal layer to remove sensitive attributes, reducing privacy risks while maintaining fairness. The uncertainty-aware aggregation employs an evidential neural network to weight client updates adaptively, prioritizing contributions with lower fairness disparities and higher confidence. This ensures robust and equitable FL model updates. We evaluate RESFL on the FACET dataset and CARLA simulator, assessing accuracy, fairness, privacy resilience, and robustness under varying conditions. RESFL improves detection accuracy, reduces fairness disparities, and lowers privacy attack success rates while demonstrating superior robustness to adversarial conditions compared to other approaches.
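The uncertainty-aware aggregation described in the abstract can be sketched as follows. The sketch assumes the common Dirichlet formulation of evidential uncertainty, u = K / S with S the total evidence plus the number of classes K, and a simple weighting rule that favors confident, low-disparity clients; the function names and the exact weighting form are illustrative assumptions, not the paper's rule:

```python
import numpy as np

def evidential_uncertainty(evidence, num_classes):
    """Dirichlet-based uncertainty from an evidential network's output:
    u = K / S, where S = sum(evidence) + K is the Dirichlet strength.
    Low total evidence -> S close to K -> u close to 1 (uncertain)."""
    S = np.sum(evidence) + num_classes
    return num_classes / S

def aggregate(updates, uncertainties, disparities, eps=1e-8):
    """Fairness- and uncertainty-aware weighted averaging of client updates.

    Each client's weight grows with its confidence (1 - u) and shrinks with
    its measured fairness disparity (illustrative weighting, not the paper's
    exact rule). Weights are normalized to sum to 1 before averaging.
    """
    w = (1.0 - np.asarray(uncertainties)) / (np.asarray(disparities) + eps)
    w = w / w.sum()
    # Weighted sum over the client axis of the stacked updates.
    return np.tensordot(w, np.asarray(updates), axes=1)
```

The key design point is that clients whose evidential outputs are uncertain, or whose local models show larger group disparities, contribute less to the global update, which matches the abstract's description of prioritizing low-disparity, high-confidence contributions.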
Problem

Research questions and friction points this paper is trying to address.

Balancing privacy, fairness, and utility in federated learning for autonomous vehicles.
Addressing performance disparities across demographic groups in FL-based object detection.
Mitigating privacy risks while maintaining fairness in AV perception models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial privacy disentanglement for sensitive attribute removal
Uncertainty-guided fairness-aware aggregation for equitable updates
Evidential neural network for adaptive client weighting