🤖 AI Summary
This paper addresses systemic bias against social subgroups, particularly minority groups, induced by vertex cover and feedback vertex set algorithms on real-world graphs in combinatorial optimization. The authors propose a modeling paradigm that explicitly incorporates group fairness into the objective function. Unlike conventional approaches that optimize only total cost, the framework introduces a weighted graph model annotated with group labels, designs approximation algorithms that satisfy explicit group-fairness constraints, and establishes a quantifiable measure of bias. Theoretical analysis guarantees bounded approximation ratios under the fairness constraints. Experiments on diverse real-world and synthetic graphs show that the method reduces inter-group disparity in solution impact by 40–65% while incurring only a marginal increase in total cost (<15%), yielding substantially improved algorithmic fairness and societal applicability without compromising computational efficiency.
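To make the summary concrete, here is a minimal, hedged sketch of the kind of trade-off described above. This is *not* the paper's algorithm: the toy graph, weights, group labels, the "impact share" measure, and the fairness gap threshold are all invented for illustration. It compares a cheapest vertex cover with one constrained so that the two groups' shares of cover membership stay close, via brute-force enumeration (fine only for tiny graphs).

```python
# Illustrative sketch only -- not the paper's algorithm or data.
# Brute-force comparison of an unconstrained minimum-weight vertex cover
# with one subject to a (hypothetical) group-fairness constraint.
from itertools import combinations

# Toy graph: each vertex carries a group label ("A" majority, "B" minority)
# and a weight. All values here are made up for demonstration.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
group = {0: "A", 1: "A", 2: "B", 3: "B", 4: "A"}
weight = {0: 1, 1: 2, 2: 1, 3: 1, 4: 2}
vertices = sorted(group)

def is_cover(s):
    """A vertex cover must touch every edge."""
    return all(u in s or v in s for u, v in edges)

def group_share(s, g):
    """Fraction of group g's vertices placed in the cover (its 'impact')."""
    members = [v for v in vertices if group[v] == g]
    return sum(1 for v in members if v in s) / len(members)

def best_cover(fair_gap=None):
    """Cheapest vertex cover; if fair_gap is set, additionally require the
    per-group impact shares to differ by at most fair_gap."""
    best = None
    for r in range(len(vertices) + 1):
        for cand in combinations(vertices, r):
            s = set(cand)
            if not is_cover(s):
                continue
            if fair_gap is not None and \
                    abs(group_share(s, "A") - group_share(s, "B")) > fair_gap:
                continue
            cost = sum(weight[v] for v in s)
            if best is None or cost < best[0]:
                best = (cost, s)
    return best

print("unconstrained:", best_cover())
print("fair (gap <= 0.2):", best_cover(fair_gap=0.2))
```

On this toy instance the unconstrained optimum places every minority-group vertex in the cover, while the fairness-constrained optimum nearly equalizes the two groups' shares at a modest extra cost, mirroring the disparity-versus-cost trade-off the summary reports.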
📝 Abstract
A typical goal of research in combinatorial optimization is to come up with fast algorithms that find optimal solutions to a computational problem. The process that takes a real-world problem and extracts a clean mathematical abstraction of it often throws out a lot of "side information" which is deemed irrelevant. However, the discarded information can be of real significance to the end-user of the algorithm's output. Not all solutions of the same cost have equal impact in the real world; some solutions may be much more desirable than others, even at the expense of some additional cost. If the impact, positive or negative, is mostly felt by specific (minority) subgroups of the population, the population at large may remain mostly unaware of it. In this work we ask the question of finding solutions to combinatorial optimization problems that are "unbiased" with respect to a collection of specified subgroups of the total population.