Decoding Federated Learning: The FedNAM+ Conformal Revolution

📅 2025-06-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning faces three core challenges: insufficient uncertainty quantification, limited interpretability, and inadequate robustness. To address these, we propose FedNAM+, the first federated interpretable framework integrating Neural Additive Models (NAMs) with conformal prediction, enabling pixel-level confidence analysis and global uncertainty calibration. Our method introduces a novel dynamic level adjustment mechanism driven by gradient-based sensitivity maps, visualizing prediction reliability and yielding statistically valid confidence intervals that LIME and SHAP cannot provide. Evaluated on CT scan, MNIST, and CIFAR datasets, FedNAM+ incurs only a 0.1% accuracy drop on MNIST while substantially reducing communication and computational overhead. FedNAM+ is the first approach to jointly achieve high-fidelity interpretability, rigorous uncertainty quantification, and lightweight deployment within the federated learning paradigm, thereby enhancing the transparency and trustworthiness of distributed AI systems.
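The summary's "gradient-based sensitivity maps" identify which input pixels most influence a prediction. The paper's exact procedure is not given here; a minimal finite-difference sketch (real implementations would use autodiff) conveys the idea, with the example function `f` chosen purely for illustration:

```python
import numpy as np

def sensitivity_map(f, x, eps=1e-4):
    """Approximate |df/dx_i| for each input component by finite differences.

    A stand-in for gradient-based sensitivity maps: large values mark
    inputs (e.g. pixels) to which the prediction is most sensitive.
    """
    base = f(x)
    grads = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp.flat[i] += eps          # perturb one input component
        grads.flat[i] = (f(xp) - base) / eps
    return np.abs(grads)

# Toy linear "model": sensitivities should match |coefficients|.
f = lambda x: float((x * np.array([3.0, 0.0, -1.0])).sum())
m = sensitivity_map(f, np.array([0.5, 0.5, 0.5]))
```

For an image model, the same map reshaped to the image grid gives the pixel-level reliability visualization the summary describes.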

📝 Abstract
Federated learning has significantly advanced distributed training of machine learning models across decentralized data sources. However, existing frameworks often lack comprehensive solutions that combine uncertainty quantification, interpretability, and robustness. To address this, we propose FedNAM+, a federated learning framework that integrates Neural Additive Models (NAMs) with a novel conformal prediction method to enable interpretable and reliable uncertainty estimation. Our method introduces a dynamic level adjustment technique that utilizes gradient-based sensitivity maps to identify key input features influencing predictions. This facilitates both interpretability and pixel-wise uncertainty estimates. Unlike traditional interpretability methods such as LIME and SHAP, which do not provide confidence intervals, FedNAM+ offers visual insights into prediction reliability. We validate our approach through experiments on CT scan, MNIST, and CIFAR datasets, demonstrating high prediction accuracy with minimal loss (e.g., only 0.1% on MNIST), along with transparent uncertainty measures. Visual analysis highlights variable uncertainty intervals, revealing low-confidence regions where model performance can be improved with additional data. Compared to Monte Carlo Dropout, FedNAM+ delivers efficient and global uncertainty estimates with reduced computational overhead, making it particularly suitable for federated learning scenarios. Overall, FedNAM+ provides a robust, interpretable, and computationally efficient framework that enhances trust and transparency in decentralized predictive modeling.
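The abstract's claim that FedNAM+ yields statistically valid confidence intervals rests on conformal prediction. The paper's novel variant is not detailed here; a standard split-conformal sketch for regression shows the basic mechanism (names and the toy data are illustrative assumptions):

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_preds, alpha=0.1):
    """Split conformal prediction for regression.

    Absolute residuals on a held-out calibration set define a quantile
    that widens point predictions into distribution-free intervals with
    roughly (1 - alpha) coverage.
    """
    residuals = np.abs(cal_labels - cal_preds)
    n = len(residuals)
    # Finite-sample corrected quantile level.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, q_level)
    return test_preds - q, test_preds + q

# Toy demo: noisy linear data and a deliberately imperfect predictor.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = 2 * x + rng.normal(0, 0.3, 2000)
pred = 1.9 * x                                  # slightly biased model
lo, hi = split_conformal_interval(pred[:1000], y[:1000], pred[1000:])
coverage = np.mean((y[1000:] >= lo) & (y[1000:] <= hi))
```

The guarantee holds regardless of the underlying model, which is why conformal intervals remain valid even for the biased predictor above; in a federated setting, the open question the paper targets is calibrating such intervals across decentralized clients.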
Problem

Research questions and friction points this paper is trying to address.

How to combine uncertainty quantification, interpretability, and robustness in a single federated learning framework
How to obtain interpretable and reliable uncertainty estimates when training on decentralized data
How to improve model transparency and trust through dynamic feature sensitivity analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates Neural Additive Models with conformal prediction
Uses gradient-based sensitivity maps for interpretability
Provides efficient global uncertainty estimates
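The first innovation above hinges on the additive structure of NAMs: each feature is processed by its own small subnetwork, and the prediction is the sum of the per-feature outputs, so each feature's contribution can be plotted directly. A minimal forward-pass sketch (untrained, with hypothetical class and parameter names; the paper's architecture details are not given here):

```python
import numpy as np

class TinyNAM:
    """Minimal Neural Additive Model: one small ReLU MLP per feature,
    prediction = sum of per-feature outputs. Illustrative only."""

    def __init__(self, n_features, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        # One independent (W1, b1, W2, b2) subnetwork per feature.
        self.nets = [
            (rng.normal(0, 1, (1, hidden)), np.zeros(hidden),
             rng.normal(0, 1, (hidden, 1)), np.zeros(1))
            for _ in range(n_features)
        ]

    def feature_contribution(self, j, xj):
        """Shape function f_j applied to the j-th feature column."""
        W1, b1, W2, b2 = self.nets[j]
        h = np.maximum(xj[:, None] @ W1 + b1, 0.0)   # ReLU hidden layer
        return (h @ W2 + b2).ravel()

    def predict(self, X):
        return sum(self.feature_contribution(j, X[:, j])
                   for j in range(X.shape[1]))

nam = TinyNAM(n_features=3)
X = np.random.default_rng(1).normal(size=(5, 3))
total = nam.predict(X)
parts = [nam.feature_contribution(j, X[:, j]) for j in range(3)]
```

Because `total` is exactly the sum of `parts`, plotting each `f_j` against its feature gives a faithful global explanation, which is what distinguishes NAMs from post-hoc methods like LIME or SHAP.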