🤖 AI Summary
This work addresses limitations of existing evaluation approaches for multiparty multiobjective optimization, which rely on mean aggregation, neglect fairness among the parties, and evaluate only strictly common Pareto optimal solutions, thereby failing to capture balanced utilities and consensus among heterogeneous decision makers. From a cooperative game-theoretic perspective, the paper proposes a fairness-aware performance evaluation framework that generalizes the notion of optimality via concession rate vectors, embeds classical metrics into a Nash-product-based evaluation function, and establishes a theoretical foundation satisfying four fairness axioms. Experimental results demonstrate that the framework effectively discriminates algorithmic performance both with and without strictly common solutions, assigning higher scores to algorithms that better cover consensus regions and yield more equitable utility distributions, thereby realizing, for the first time, fairness evaluation driven by a Nash product mechanism.
📝 Abstract
In multiparty multiobjective optimization problems (MPMOPs), solution sets are usually evaluated by computing classical performance metrics and aggregating them across decision makers (DMs). However, such mean-based evaluations may be unfair, favoring certain parties, because they assume that identical geometric approximation quality to each party's Pareto front (PF) carries comparable evaluative significance. Moreover, prevailing notions of MPMOP optimal solutions are restricted to strictly common Pareto optimal solutions, representing a narrow form of cooperation in multiparty decision-making scenarios. These limitations obscure whether a solution set reflects balanced relative gains or meaningful consensus among heterogeneous DMs. To address these issues, this paper develops a fairness-aware performance evaluation framework grounded in a generalized notion of consensus solutions. From a cooperative game-theoretic perspective, we formalize four axioms that a fairness-aware evaluation function for MPMOPs should satisfy. By introducing a concession rate vector to quantify the compromises each DM finds acceptable, we generalize the classical definition of MPMOP optimal solutions and embed classical performance metrics into a Nash-product-based evaluation framework, which is theoretically shown to satisfy all four axioms. To support empirical validation, we further construct benchmark problems that extend existing MPMOP suites by incorporating consensus-deficient negotiation structures. Experimental results demonstrate that the proposed framework distinguishes algorithmic performance in a manner consistent with consensus-aware fairness considerations. Specifically, algorithms converging toward strictly common solutions receive higher evaluation scores when such solutions exist, whereas in the absence of strictly common solutions, algorithms that effectively cover the commonly acceptable region are evaluated more favorably.
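The core contrast the abstract draws, between mean-based aggregation and a Nash-product-based evaluation, can be sketched in a few lines. This is an illustrative sketch only: the function name `nash_product_score`, the assumption that per-DM metric values are normalized into (0, 1], and the use of the concession rate vector as simple exponent weights are our assumptions, not the paper's exact formulation.

```python
import math

def nash_product_score(per_dm_metrics, concession_rates=None):
    """Aggregate per-decision-maker (DM) quality scores with a Nash product.

    per_dm_metrics: one normalized metric value in (0, 1] per DM
    concession_rates: optional per-DM weights (hypothetical stand-in for
        the paper's concession rate vector); defaults to equal treatment.
    """
    if concession_rates is None:
        concession_rates = [1.0] * len(per_dm_metrics)
    total_w = sum(concession_rates)
    # Weighted geometric mean: one near-zero utility drags the whole
    # score down, unlike the arithmetic mean used in mean aggregation.
    log_sum = sum(w * math.log(u)
                  for w, u in zip(concession_rates, per_dm_metrics))
    return math.exp(log_sum / total_w)

# Two allocations with the same arithmetic mean (0.6) across two DMs:
balanced = [0.6, 0.6]   # equitable utilities
skewed = [0.9, 0.3]     # one DM clearly disadvantaged
print(nash_product_score(balanced) > nash_product_score(skewed))  # True
```

The example shows why a Nash-product mechanism is fairness-sensitive where mean aggregation is blind: both allocations average 0.6, but the geometric mean of the skewed one is √0.27 ≈ 0.52, so the balanced solution set scores strictly higher.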