🤖 AI Summary
In federated learning, evaluating client contributions faces two challenges: high computational complexity and the intractability of exact Shapley value computation. This paper proposes FedOwen, the first framework to bring Owen sampling into federated learning for efficient Shapley value approximation of client contributions. FedOwen further designs an adaptive client selection mechanism that balances exploration and exploitation, dynamically identifying high-value clients under a constrained evaluation budget, and is tailored to non-IID data settings. Experiments demonstrate that, under identical communication rounds and evaluation costs, FedOwen improves model accuracy by up to 23% over state-of-the-art methods, significantly accelerates global model convergence, and effectively identifies sparse yet highly informative clients.
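The summary above contrasts exact Shapley computation, which scales exponentially, with Owen sampling. As a rough illustration of that contrast (not the paper's implementation: the toy utility function `v` and the sampling budget below are placeholders), Owen's multilinear-extension view writes each player's Shapley value as an integral over an inclusion probability q, which can be estimated by stratified Monte Carlo:

```python
import itertools
import math
import random

def exact_shapley(players, v):
    """Exact Shapley values by averaging marginal contributions over
    all n! player orderings; infeasible beyond a handful of clients."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in itertools.permutations(players):
        coalition = set()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition.add(p)
    return {p: phi[p] / math.factorial(n) for p in players}

def owen_shapley(players, v, num_q=50, samples_per_q=20, seed=0):
    """Owen (multilinear-extension) sampling: phi_i equals the integral
    over q in [0, 1] of E[v(S | {i}) - v(S)], where S contains every
    other player independently with probability q. A midpoint grid
    over q plus Monte Carlo sampling of S approximates the integral."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for k in range(num_q):
        q = (k + 0.5) / num_q  # midpoint of the k-th stratum of [0, 1]
        for _ in range(samples_per_q):
            for i in players:
                s = {j for j in players if j != i and rng.random() < q}
                phi[i] += v(s | {i}) - v(s)
    total = num_q * samples_per_q
    return {p: phi[p] / total for p in players}
```

The estimator's cost is controlled by `num_q * samples_per_q` utility evaluations per player, independent of 2^n, which is what makes a fixed evaluation budget meaningful.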
📝 Abstract
Federated Learning (FL) aggregates information from multiple clients to train a shared global model without exposing raw data. Accurately estimating each client's contribution is essential not only for fair rewards but also for selecting the most useful clients so the global model converges faster. The Shapley value is a principled choice, yet exact computation scales exponentially with the number of clients, making it infeasible for large federations. We propose FedOwen, an efficient framework that uses Owen sampling to approximate Shapley values under the same total evaluation budget as existing methods while keeping the approximation error small. In addition, FedOwen uses an adaptive client selection strategy that balances exploiting high-value clients with exploring under-sampled ones, reducing bias and uncovering rare but informative data. Under a fixed valuation cost, FedOwen achieves up to 23% higher final accuracy within the same number of communication rounds compared to state-of-the-art baselines on non-IID benchmarks.
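The abstract describes balancing exploitation of high-value clients with exploration of under-sampled ones. One standard way to realize that trade-off is an upper-confidence-bound rule; the sketch below is a generic illustration of the idea, not FedOwen's actual mechanism (the function name, the exploration constant `c`, and the bonus form are assumptions made here):

```python
import math

def select_clients(est_value, eval_count, round_t, k, c=1.0):
    """UCB-style selection: score each client by its estimated Shapley
    contribution plus an exploration bonus that is large for rarely
    evaluated clients and shrinks as eval_count grows; pick the top-k."""
    scores = {
        cid: est_value[cid]
        + c * math.sqrt(math.log(round_t + 1) / (eval_count[cid] + 1))
        for cid in est_value
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

With this kind of rule, a never-evaluated client can outrank one with a higher value estimate, which is how under-sampled but potentially informative clients still get picked under a tight evaluation budget.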