🤖 AI Summary
To address biases against minority groups in deployed machine learning models, this paper proposes a user-side, decentralized fairness-optimization framework: minority-group users collectively intervene by strategically relabeling their own input instances, without requiring enterprise involvement or modifications to the training pipeline. The approach is model-agnostic, compatible with black-box models, and minimally invasive; the authors describe it as the first effort to give end users agency over fairness improvement. They design three approximately optimal relabeling algorithms and evaluate them on multiple real-world datasets. Results show that adjusting labels for only a small fraction of samples significantly reduces inter-group predictive unfairness (for example, equal opportunity difference drops by up to 40%) while preserving overall predictive accuracy, with classification error increasing by less than 0.5%.
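To make the fairness metric concrete, the sketch below (not the paper's code; names and data are illustrative) computes the equal opportunity difference, i.e. the absolute gap in true positive rate between two demographic groups:

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true positive rate between group 0 and group 1.
    This is the fairness metric the summary reports improving by up to 40%."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in (0, 1):
        positives = (group == g) & (y_true == 1)  # ground-truth positives in group g
        tprs.append(y_pred[positives].mean())     # fraction correctly predicted positive
    return abs(tprs[0] - tprs[1])

# Toy data: group 1's positives are recovered less often than group 0's.
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 1, 1, 1, 0, 1])
print(equal_opportunity_difference(y_true, y_pred, group))  # → 0.6666666666666667
```

A relabeling intervention in the paper's sense would aim to shrink this gap by changing a small number of the minority group's own labels before the firm retrains, rather than by modifying the model.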
📝 Abstract
Machine learning models often preserve biases present in their training data, leading to unfair treatment of certain minority groups. Existing firm-side bias mitigation techniques typically incur utility costs and require organizational buy-in. Recognizing that many models rely on user-contributed data, we show that end users can induce fairness through the framework of Algorithmic Collective Action, in which a coordinated minority group strategically relabels its own data to enhance fairness without altering the firm's training process. We propose three practical, model-agnostic methods to approximate ideal relabeling and validate them on real-world datasets. Our findings show that a subgroup of the minority can substantially reduce unfairness with only a small impact on the overall prediction error.