Evaluation of Multi- and Single-objective Learning Algorithms for Imbalanced Data

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation frameworks struggle to unify the assessment of single-objective and multi-objective optimization (MOO) algorithms for imbalanced classification, hindering fair and interpretable model selection. Method: This paper proposes a user-preference-driven cross-paradigm performance comparison framework. It projects the high-dimensional objective space into a lower-dimensional space via aggregation criteria, integrates Pareto front extraction with preference modeling, and quantifies the relative performance of both single-solution and MOO algorithms under a unified benchmark. Contribution/Results: The framework systematically bridges the long-standing evaluation gap between single-objective and multi-objective learning algorithms for the first time, significantly enhancing interpretability and practicality in algorithm selection. Extensive experiments across multiple imbalanced classification benchmarks demonstrate that the method improves the reliability, consistency, and decision-support capability of algorithm comparisons.
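The projection step described above reduces several objective values per candidate model to a single scalar via an aggregation criterion. As a minimal sketch (the weighted sum below is an illustrative choice, not necessarily the aggregation used in the paper; the objective values are made up):

```python
import numpy as np

def aggregate(objectives: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Project each point in objective space to a single scalar
    via a weighted sum, one possible aggregation criterion."""
    return objectives @ weights

# Two objectives per candidate model (both to be maximized):
# minority-class recall and majority-class specificity.
solutions = np.array([
    [0.90, 0.60],
    [0.75, 0.85],
    [0.50, 0.95],
])
scores = aggregate(solutions, weights=np.array([0.5, 0.5]))
best = int(np.argmax(scores))  # index of the highest aggregated score
```

As the abstract notes, the aggregated score is ambiguous on its own: the second solution wins here, but the scalar 0.80 does not reveal the individual recall and specificity behind it.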

📝 Abstract
Many machine learning tasks aim to find models that work well not for a single criterion but for a group of often opposing ones. One such example is imbalanced data classification, where we want to achieve the best possible classification quality for the minority class without degrading the classification quality of the majority class. One solution is to propose an aggregate learning criterion and reduce the multi-objective learning task to a single-criterion optimization problem. Unfortunately, such an approach suffers from ambiguity of interpretation, since the value of the aggregated criterion does not indicate the values of the component criteria. Hence, there are more and more proposals for algorithms based on multi-objective optimization (MOO), which can simultaneously optimize multiple criteria. However, such an approach returns a set of multiple non-dominated solutions (a Pareto front). Selecting a single solution from the Pareto front is a challenge in itself, and much attention is paid to how to make that selection considering user preferences, as well as how to compare the solutions returned by different MOO algorithms among themselves. Thus, a significant gap has been identified in classifier evaluation methodology: how to reliably compare methods returning single solutions with algorithms returning solutions in the form of Pareto fronts. To fill this gap, this article proposes a new, reliable way of evaluating multi-objective algorithms against methods that return single solutions, while pointing out solutions from a Pareto front tailored to the user's preferences. This work focuses only on algorithm comparison, not on their learning. The algorithms selected for this study are illustrative, chosen to help explain the proposed approach.
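The Pareto front mentioned above is the set of non-dominated solutions: those for which no other solution is at least as good on every objective and strictly better on at least one. A minimal sketch of extracting it (a naive pairwise check, assuming all objectives are maximized; the values are illustrative, not from the paper):

```python
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Return indices of non-dominated points, assuming every
    objective is to be maximized."""
    n = len(points)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i: no worse on all objectives, better on one.
            if i != j and np.all(points[j] >= points[i]) \
                      and np.any(points[j] > points[i]):
                keep[i] = False
                break
    return np.flatnonzero(keep)

points = np.array([
    [0.90, 0.60],
    [0.75, 0.85],
    [0.70, 0.80],  # dominated by [0.75, 0.85]
    [0.50, 0.95],
])
front = pareto_front(points)
```

The quadratic pairwise check is fine for the small solution sets typical of MOO runs; faster sort-based algorithms exist for large point clouds.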
Problem

Research questions and friction points this paper is trying to address.

Evaluating multi-objective vs single-objective algorithms for imbalanced data
Comparing Pareto front solutions with single-solution methods
Selecting optimal solutions from Pareto fronts using user preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-objective optimization for imbalanced data classification
Comparing Pareto front solutions with single-criterion methods
User preference-based selection from multiple non-dominated solutions
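User-preference-based selection from a set of non-dominated solutions is commonly done by scalarizing distances to an aspiration point. The sketch below uses a weighted Chebyshev distance to a user-supplied reference point; this is one standard preference model, not necessarily the one proposed in the paper, and the numbers are illustrative:

```python
import numpy as np

def select_by_preference(front: np.ndarray, reference: np.ndarray,
                         weights: np.ndarray) -> int:
    """Pick the Pareto-front solution minimizing the weighted
    Chebyshev distance to a reference (aspiration) point."""
    dist = np.max(weights * np.abs(reference - front), axis=1)
    return int(np.argmin(dist))

# Non-dominated solutions: (minority recall, majority specificity).
front = np.array([
    [0.90, 0.60],
    [0.75, 0.85],
    [0.50, 0.95],
])
# A user who cares more about the minority class puts a higher
# weight on the first objective.
idx = select_by_preference(front,
                           reference=np.array([1.0, 1.0]),
                           weights=np.array([0.7, 0.3]))
```

With the minority class weighted at 0.7, the selection moves toward the solution with the highest minority recall; once a single solution is pinned down this way, it can be compared directly against single-solution methods under the same criteria.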
🔎 Similar Papers
2024-05-13 · European Conference on Artificial Intelligence · Citations: 1
Szymon Wojciechowski
Wrocław University of Science and Technology (Politechnika Wrocławska)
machine learning
Michał Woźniak
Department of Systems and Computer Networks, Wrocław University of Science and Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland