A supervised discriminant data representation: application to pattern classification

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address data representation in supervised multi-class classification, this paper proposes a unified linear feature extraction method, RSLDA-ICS_DLSR, integrating Robust Sparse Linear Discriminant Analysis (RSLDA) with Inter-Class Sparsity-based Discriminative Least Squares Regression (ICS_DLSR). The core contribution is a joint optimization framework that enforces row-wise sparsity consistency within classes while allowing diverse discriminative criteria to be flexibly combined and tuned. The method jointly learns a linear transformation matrix and an orthogonal matrix via sparse regularization, iterative alternating minimization, and multiple initialization strategies. Extensive experiments on benchmark datasets, including AR and Extended YaleB (faces), Caltech-101 (objects), and MNIST (handwritten digits), show that RSLDA-ICS_DLSR consistently outperforms state-of-the-art linear discriminant methods in classification accuracy, validating its representation capability and generalization performance.
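The "row-wise sparsity" mentioned in the summary is typically obtained with an L2,1-type penalty, whose proximal operator zeroes out entire rows of the transformation matrix. The sketch below illustrates that operator under the standard group soft-thresholding formulation; it is an illustrative assumption, not code from the paper:

```python
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of tau * ||W||_{2,1} (row-wise group soft-thresholding).

    Rows whose Euclidean norm falls below tau are set to zero, which is how an
    L2,1 penalty performs feature selection: a zero row of the transformation
    matrix discards the corresponding input feature.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return W * scale
```

For example, with `tau = 1.0` a row of norm 5 is shrunk by a factor of 0.8, while a row of norm 0.14 is zeroed entirely.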

📝 Abstract
The performance of machine learning and pattern recognition algorithms generally depends on data representation. This is why much of the current effort in machine learning goes into the design of preprocessing frameworks and data transformations that support effective learning. The method proposed in this work is a hybrid linear feature extraction scheme for supervised multi-class classification problems. Inspired by two recent linear discriminant methods, robust sparse linear discriminant analysis (RSLDA) and inter-class sparsity-based discriminative least squares regression (ICS_DLSR), we propose a unifying criterion that retains the advantages of these two powerful methods. The resulting transformation relies on sparsity-promoting techniques both to select the features that most accurately represent the data and to preserve the row-sparsity consistency of samples from the same class. The linear transformation and the orthogonal matrix are estimated using an iterative alternating minimization scheme based on the steepest-descent gradient method and different initialization schemes. The proposed framework is generic in the sense that it allows the combination and tuning of other linear discriminant embedding methods. In experiments conducted on several datasets including faces, objects, and digits, the proposed method outperformed competing methods in most cases.
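The abstract's alternating estimation of a linear transformation and an orthogonal matrix can be illustrated with a generic surrogate objective. The sketch below assumes a hypothetical criterion of the form ||X - P Qᵀ X||² + ||Qᵀ X - Y||² + λ||Q||₂,₁ (not the paper's exact RSLDA-ICS_DLSR objective) and alternates an orthogonal-Procrustes update for P with a proximal gradient step for Q:

```python
import numpy as np

def alternating_min(X, Y, dim, lam=0.1, iters=200, seed=0):
    """Illustrative alternating minimization for
        min_{P^T P = I, Q}  ||X - P Q^T X||_F^2 + ||Q^T X - Y||_F^2 + lam * ||Q||_{2,1}
    X: (d, n) data matrix, Y: (dim, n) label targets. Hypothetical surrogate
    objective, standing in for the paper's criterion.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    Q = 0.01 * rng.standard_normal((d, dim))
    # Conservative step size based on the spectral norm of X
    lr = 0.1 / (np.linalg.norm(X, 2) ** 2 + 1e-12)
    for _ in range(iters):
        # P-step: orthogonal Procrustes, P = U V^T where X X^T Q = U S V^T
        U, _, Vt = np.linalg.svd(X @ (X.T @ Q), full_matrices=False)
        P = U @ Vt
        # Q-step: one steepest-descent step on the two smooth quadratic terms
        grad = 2.0 * (X @ (X.T @ (2.0 * Q - P)) - X @ Y.T)
        Q = Q - lr * grad
        # Proximal step for the row-sparsity (L2,1) penalty
        norms = np.linalg.norm(Q, axis=1, keepdims=True)
        Q = Q * np.maximum(1.0 - lr * lam / np.maximum(norms, 1e-12), 0.0)
    return P, Q
```

The Procrustes step gives the closed-form orthogonal minimizer for fixed Q, while the gradient-plus-proximal step handles the non-smooth sparsity term; different random seeds correspond to the "different initialization schemes" the abstract mentions.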
Problem

Research questions and friction points this paper is trying to address.

Developing hybrid linear feature extraction for supervised classification
Unifying RSLDA and ICS_DLSR methods to retain their advantages
Enhancing pattern recognition through sparsity-based feature selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid linear feature extraction for classification
Sparsity techniques for feature selection and consistency
Iterative alternating minimization with gradient descent