🤖 AI Summary
In federated learning for autonomous driving, privacy-preserving mechanisms—particularly differential privacy—exacerbate model unfairness across demographic groups, creating a critical tension between privacy and fairness.
Method: We propose RESFL, a privacy-fairness co-optimization framework featuring (i) an adversarial privacy-disentanglement mechanism that uses a gradient reversal layer to suppress bias without exposing sensitive attributes (see the sketch below); and (ii) an uncertainty-guided adaptive weighted aggregation strategy, driven by an evidential neural network, to enhance robustness and generalization.
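To make component (i) concrete, here is a minimal PyTorch sketch of a gradient reversal layer feeding a sensitive-attribute adversary. The class names, layer sizes, and the `lambd` scaling factor are illustrative assumptions, not RESFL's published implementation.

```python
# Minimal sketch of adversarial privacy disentanglement via a gradient
# reversal layer (GRL). All names and dimensions here are hypothetical.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales by lambd) the
    gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Second return value is the (non-existent) gradient for lambd.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class SensitiveAttributeAdversary(nn.Module):
    """Hypothetical adversary head that tries to predict a sensitive
    attribute from shared detection features."""

    def __init__(self, feat_dim=256, num_sensitive=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_sensitive),
        )

    def forward(self, features):
        # The GRL flips gradients here, so minimizing the adversary's
        # loss w.r.t. its own weights simultaneously pushes the shared
        # feature extractor to *hide* the sensitive attribute.
        return self.head(grad_reverse(features, self.lambd))
```

In training, the adversary's classification loss is added to the detection loss; the reversal flips its gradient sign for the backbone, so the extractor learns features from which the sensitive attribute is hard to recover.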
Contributions/Results: Evaluated on the FACET dataset and the CARLA simulator, our method achieves significant improvements in detection accuracy, reduces the inter-group fairness disparity (ΔSPD) by 37%, lowers the privacy attack success rate by 52%, and surpasses state-of-the-art methods in robustness. To the best of our knowledge, this is the first work to systematically address the tripartite trade-off among privacy, fairness, and utility in federated object detection.
📝 Abstract
Autonomous vehicles (AVs) increasingly rely on Federated Learning (FL) to enhance perception models while preserving privacy. However, existing FL frameworks struggle to balance privacy, fairness, and robustness, leading to performance disparities across demographic groups. Privacy-preserving techniques like differential privacy mitigate data leakage risks but worsen fairness by restricting access to the sensitive attributes needed for bias correction. This work explores the trade-off between privacy and fairness in FL-based object detection for AVs and introduces RESFL, an integrated solution that optimizes both. RESFL incorporates adversarial privacy disentanglement and uncertainty-guided fairness-aware aggregation. The adversarial component uses a gradient reversal layer to strip sensitive-attribute information from learned features, reducing privacy risks while maintaining fairness. The uncertainty-aware aggregation employs an evidential neural network to weight client updates adaptively, prioritizing contributions with lower fairness disparities and higher confidence, yielding robust and equitable FL model updates. We evaluate RESFL on the FACET dataset and in the CARLA simulator, assessing accuracy, fairness, privacy resilience, and robustness under varying conditions. RESFL improves detection accuracy, reduces fairness disparities, and lowers privacy attack success rates while demonstrating superior robustness to adversarial conditions compared to other approaches.
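The aggregation rule described above can be pictured with the following sketch. It assumes the standard evidential-deep-learning vacuity uncertainty u = K / Σ_k α_k (with α = evidence + 1) and a simple confidence-over-disparity weighting; the function names and the exact weighting formula are assumptions for illustration, not RESFL's published rule.

```python
# Hedged sketch of uncertainty-guided, fairness-aware aggregation:
# clients with lower evidential uncertainty and smaller fairness gaps
# receive larger weights. The weighting formula is a hypothetical
# illustration of the idea, not the paper's exact rule.
import torch


def evidential_uncertainty(evidence: torch.Tensor) -> torch.Tensor:
    """Vacuity uncertainty for Dirichlet evidence over K classes:
    alpha = evidence + 1, u = K / sum(alpha)."""
    alpha = evidence + 1.0
    k = alpha.shape[-1]
    return k / alpha.sum(dim=-1)


def fair_uncertainty_weighted_avg(client_updates, uncertainties,
                                  disparities, eps=1e-8):
    """Weighted average of client state_dicts (same keys and shapes).

    uncertainties: per-client mean evidential uncertainty in [0, 1].
    disparities:   per-client fairness gap (e.g., |SPD|); lower is better.
    """
    u = torch.tensor(uncertainties, dtype=torch.float32)
    d = torch.tensor(disparities, dtype=torch.float32)
    # Higher confidence (1 - u) and lower disparity -> larger weight.
    weights = (1.0 - u) / (d + eps)
    weights = weights / weights.sum()
    # Assumes all parameters are floating-point tensors.
    avg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in client_updates[0].items()}
    for w, update in zip(weights, client_updates):
        for k, v in update.items():
            avg[k] += w * v.float()
    return avg


# Toy usage: two clients, the second more uncertain and less fair,
# so it contributes less to the global model.
# global_sd = fair_uncertainty_weighted_avg(
#     [model_a.state_dict(), model_b.state_dict()],
#     uncertainties=[0.2, 0.6], disparities=[0.05, 0.20])
```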