🤖 AI Summary
To address the limited discriminative capability of naïve Bayes stemming from its strong conditional independence assumption along the cardinal basis directions, this paper proposes Projection Naïve Bayes (PNB), which learns a linear projection via discriminative projection pursuit and performs the naïve Bayes factorisation of the class-conditional densities in the resulting low-dimensional projected space. PNB integrates discriminative projection learning with naïve Bayes modelling, yielding dimensionality reduction and visualisation alongside classification, and the authors discuss an intuitive connection with class-conditional independent component analysis. Experiments on 162 publicly available benchmark datasets show that PNB substantially outperforms popular probabilistic discriminant analysis models, including Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA), and is highly competitive with Support Vector Machines (SVM), while retaining the statistical interpretability and computational efficiency of generative models.
📝 Abstract
In the Naive Bayes classification model, the class-conditional densities are estimated as the products of their marginal densities along the cardinal basis directions. We study the problem of obtaining an alternative basis for this factorisation with the objective of enhancing the discriminatory power of the associated classification model. We formulate the problem as a projection pursuit to find the optimal linear projection on which to perform classification. Optimality is determined by the multinomial likelihood, within which probabilities are estimated using the Naive Bayes factorisation of the projected data. Projection pursuit offers the added benefits of dimension reduction and visualisation. We discuss an intuitive connection with class-conditional independent component analysis, and show how this is realised visually in practical applications. The performance of the resulting classification models is investigated using a large collection of (162) publicly available benchmark data sets and in comparison with relevant alternatives. We find that the proposed approach substantially outperforms other popular probabilistic discriminant analysis models and is highly competitive with Support Vector Machines.
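To make the idea concrete, here is a minimal sketch (not the authors' implementation) of the approach the abstract describes: project the data with a linear map, estimate each class-conditional density as a product of univariate kernel density estimates along the projected coordinates, and tune the projection by maximising the conditional (multinomial) log-likelihood of the training labels. The function names, the Gaussian-KDE marginals, and the Nelder-Mead optimiser are all assumptions of this sketch, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp
from scipy.stats import gaussian_kde


def _log_posteriors(V, X_train, y_train, X_eval, classes):
    """Naive Bayes factorisation in the projected space: per class,
    log prior + sum of log marginal KDE densities along each projected
    coordinate, normalised into log posterior probabilities."""
    Ztr, Zev = X_train @ V, X_eval @ V
    log_post = np.empty((len(Zev), len(classes)))
    for ci, c in enumerate(classes):
        Zc = Ztr[y_train == c]
        lp = np.full(len(Zev), np.log(len(Zc) / len(Ztr)))  # log prior
        for j in range(Ztr.shape[1]):
            kde = gaussian_kde(Zc[:, j])        # marginal density estimate
            lp += np.log(kde(Zev[:, j]) + 1e-300)
        log_post[:, ci] = lp
    return log_post - logsumexp(log_post, axis=1, keepdims=True)


def fit_pnb(X, y, k=1, seed=0, maxiter=80):
    """Projection pursuit: choose a d x k projection V maximising the
    multinomial (conditional) log-likelihood of the training labels."""
    classes = np.unique(y)
    d = X.shape[1]
    y_idx = np.searchsorted(classes, y)

    def neg_loglik(v):
        V = v.reshape(d, k)
        V = V / np.linalg.norm(V, axis=0)       # classifier is scale-invariant
        lp = _log_posteriors(V, X, y, X, classes)
        return -lp[np.arange(len(y)), y_idx].mean()

    rng = np.random.default_rng(seed)
    res = minimize(neg_loglik, rng.standard_normal(d * k),
                   method="Nelder-Mead", options={"maxiter": maxiter})
    V = res.x.reshape(d, k)
    return V / np.linalg.norm(V, axis=0), classes


def predict_pnb(V, classes, X_train, y_train, X_new):
    """Assign each new point to the class with the largest posterior
    under the projected naive Bayes model."""
    lp = _log_posteriors(V, X_train, y_train, X_new, classes)
    return classes[np.argmax(lp, axis=1)]
```

A derivative-free optimiser is used here purely for brevity; the key point is that the projection itself is fitted against the classification likelihood, rather than being a fixed preprocessing step, which is what distinguishes this scheme from applying naive Bayes after an unsupervised dimension reduction.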