🤖 AI Summary
This paper addresses fairness risks in AI systems within software engineering practice, stemming from biased training data. It presents the first systematic literature review focused specifically on fairness interventions in software engineering, using a multi-dimensional analytical framework to identify, evaluate, and synthesize 127 relevant studies published between 2015 and 2023. The study proposes a novel three-stage fairness intervention taxonomy, covering pre-processing, in-processing, and post-processing techniques, tailored to the software development lifecycle. The analysis spans critical domains including healthcare and finance, delineating the technical applicability boundaries and practical limitations of each class of technique. Key cross-domain challenges are identified, notably the fairness-utility trade-off and the lack of model interpretability, alongside emerging trends such as the shift from static bias detection to dynamic fairness assurance. The work establishes a foundational theoretical framework and actionable guidelines for integrating fairness considerations throughout software development.
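To make the three-stage taxonomy concrete, the sketch below shows where each intervention class sits in a typical training pipeline. This is a minimal illustration under assumed conventions, not a method from the surveyed studies: the reweighing heuristic, the score-gap penalty, and the group-specific thresholds are hypothetical examples of the pre-processing, in-processing, and post-processing categories, respectively.

```python
import numpy as np

# Hypothetical illustrations of the three intervention stages from the
# taxonomy; none of these functions are taken from the surveyed papers.

def preprocess_reweigh(y, group):
    """Pre-processing: compute sample weights before training so that each
    (group, label) combination contributes equally to the loss."""
    weights = np.ones(len(y), dtype=float)
    n_cells = len(np.unique(group)) * len(np.unique(y))
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            if mask.any():
                # Under-represented cells receive proportionally larger weights.
                weights[mask] = len(y) / (n_cells * mask.sum())
    return weights

def inprocess_fairness_penalty(scores, group):
    """In-processing: a penalty (here, the gap in mean predicted scores
    between two groups) that could be added to the training objective."""
    g0, g1 = np.unique(group)[:2]
    return abs(scores[group == g0].mean() - scores[group == g1].mean())

def postprocess_group_thresholds(scores, group, thresholds):
    """Post-processing: adjust the decisions of a frozen model by applying
    group-specific thresholds to its output scores."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, group)])

# Toy usage with synthetic data
y = np.array([1, 0, 1, 0, 1, 0])
group = np.array(["A", "A", "A", "B", "B", "B"])
scores = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.3])

print(preprocess_reweigh(y, group))
print(inprocess_fairness_penalty(scores, group))
print(postprocess_group_thresholds(scores, group, {"A": 0.5, "B": 0.4}))
```

The three stages differ in where they attach to the development lifecycle: pre-processing touches only the data, in-processing changes the training objective, and post-processing leaves the trained model untouched, which is why the survey maps them onto distinct phases of software development.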
📝 Abstract
Recent developments in AI have made it broadly useful for reducing human labor and costs across several essential domains, including healthcare and finance. However, applying AI in the real world poses multiple risks and disadvantages due to potential risk factors in the data (e.g., biased datasets). Practitioners have developed a number of fairness interventions to address these kinds of problems. This paper acts as a survey, summarizing the various studies and approaches that have been developed to address fairness issues.