🤖 AI Summary
In high-dimensional uncertain environments, probabilistic neural networks (PNNs) face challenges in learning optimal smoothing parameters; conventional gradient-based methods are prone to local optima, while individual metaheuristic algorithms suffer from limited exploration capability.
Method: This paper proposes the constrained Hybrid Metaheuristic (cHM) algorithm, a two-phase adaptive framework: (i) a probing phase that evaluates five candidate optimizers—BAT, Simulated Annealing, Flower Pollination Algorithm, Bacterial Foraging Optimization, and Particle Swarm Optimization—and selects the best performer by classification error rate; and (ii) a fitting phase in which the selected metaheuristic refines the PNN's smoothing parameters within a constrained parameter space.
Contribution/Results: Evaluated on 16 benchmark datasets spanning binary/multi-class, balanced/imbalanced, and low-/high-dimensional settings, cHM achieves higher classification accuracy, better generalization, faster convergence, and greater robustness than individual metaheuristics—demonstrating the effectiveness of adaptive multi-algorithm integration for PNN parameter optimization.
📝 Abstract
This study investigates the potential of hybrid metaheuristic algorithms to enhance the training of Probabilistic Neural Networks (PNNs) by leveraging the complementary strengths of multiple optimisation strategies. Traditional learning methods, such as gradient-based approaches, often struggle in high-dimensional and uncertain environments, while single-method metaheuristics may fail to explore the solution space fully. To address these challenges, we propose the constrained Hybrid Metaheuristic (cHM) algorithm, a novel approach that combines multiple population-based optimisation techniques into a unified framework. The proposed procedure operates in two phases: an initial probing phase evaluates multiple metaheuristics to identify the best-performing one based on the error rate, followed by a fitting phase where the selected metaheuristic refines the PNN to achieve optimal smoothing parameters. This iterative process ensures efficient exploration and convergence, enhancing the network's generalisation and classification accuracy. cHM integrates several popular metaheuristics—BAT, Simulated Annealing, Flower Pollination Algorithm, Bacterial Foraging Optimization, and Particle Swarm Optimisation—as internal optimisers. To evaluate cHM's performance, experiments were conducted on 16 datasets with varying characteristics, including binary and multiclass classification tasks, balanced and imbalanced class distributions, and diverse feature dimensions. The results demonstrate that cHM effectively combines the strengths of individual metaheuristics, leading to faster convergence and more robust learning. By optimising the smoothing parameters of PNNs, the proposed method enhances classification performance across diverse datasets, proving its flexibility and efficiency in application.
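The probing-then-fitting procedure described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: simple random-perturbation searches with different step scales stand in for the actual BAT/SA/FPA/BFO/PSO optimisers, and `pnn_predict` is a toy Gaussian-kernel PNN with one smoothing parameter per feature; all function names here are hypothetical.

```python
import numpy as np

def pnn_predict(X_train, y_train, X, sigma):
    """Toy PNN: Gaussian-kernel class-conditional score per class.
    sigma is the smoothing-parameter vector (one entry per feature)."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        # per-feature smoothed squared distances, summed over features
        d2 = ((X[:, None, :] - Xc[None, :, :]) / sigma) ** 2
        scores.append(np.exp(-0.5 * d2.sum(-1)).mean(axis=1))
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

def error_rate(sigma, data):
    X_tr, y_tr, X_va, y_va = data
    return float(np.mean(pnn_predict(X_tr, y_tr, X_va, np.abs(sigma) + 1e-6) != y_va))

def random_perturbation_search(sigma, data, steps, rng, scale):
    """Stand-in for one internal metaheuristic (BAT, SA, FPA, BFO, PSO)."""
    best, best_err = sigma, error_rate(sigma, data)
    for _ in range(steps):
        # constrained search space for the smoothing parameters
        cand = np.clip(best + rng.normal(0.0, scale, size=best.shape), 0.01, 10.0)
        err = error_rate(cand, data)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

def chm_sketch(data, n_features, probe_steps=5, fit_steps=30, seed=0):
    rng = np.random.default_rng(seed)
    # Hypothetical optimiser pool; in cHM these would be BAT, SA, FPA, BFO, PSO.
    pool = {"coarse": 0.5, "medium": 0.2, "fine": 0.05}  # perturbation scales
    sigma0 = np.ones(n_features)
    # Probing phase: briefly run each optimiser, rank by error rate.
    results = {name: random_perturbation_search(sigma0, data, probe_steps, rng, s)
               for name, s in pool.items()}
    winner = min(results, key=lambda k: results[k][1])
    # Fitting phase: the winning optimiser refines the smoothing parameters.
    sigma, err = random_perturbation_search(results[winner][0], data,
                                            fit_steps, rng, pool[winner])
    return winner, sigma, err
```

Because both phases only ever accept error-reducing candidates, the fitted error is never worse than that of the initial smoothing vector; the probing budget is kept small so most evaluations are spent refining with the selected optimiser.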