🤖 AI Summary
To address low classification accuracy and poor generalization in onion pest and disease multiclass recognition—caused by severe class imbalance and limited labeled samples—this paper proposes a CNN framework integrating channel-spatial attention mechanisms with systematic deep data augmentation. A lightweight attention module is embedded into a pretrained ResNet backbone to enhance discriminative feature representation from critical disease regions. Furthermore, we synergistically combine Mixup and CutMix with field-adapted geometric and photometric augmentations to achieve class-balanced training and improved semantic robustness. Evaluated on a real-world field image dataset, the proposed model achieves 96.90% overall accuracy and an F1-score of 0.96—substantially outperforming existing binary-classification and conventional multiclass approaches. This work delivers a high-accuracy, generalizable, small-sample solution for intelligent crop disease diagnosis.
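The Mixup and CutMix augmentations mentioned above can be sketched as follows (an illustrative NumPy sketch, not the authors' code; the `alpha` parameters and box-sampling scheme follow the original Mixup/CutMix formulations, and the field-adapted geometric/photometric transforms are omitted):

```python
import numpy as np

def mixup(img_a, lbl_a, img_b, lbl_b, alpha=0.2, rng=None):
    """Mixup: convex combination of two images and their one-hot labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    return lam * img_a + (1 - lam) * img_b, lam * lbl_a + (1 - lam) * lbl_b

def cutmix(img_a, lbl_a, img_b, lbl_b, alpha=1.0, rng=None):
    """CutMix: paste a random box from img_b into img_a; labels are
    weighted by the surviving area of each image."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    h, w = img_a.shape[:2]
    # Box size such that its area fraction is roughly (1 - lam)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(h), rng.integers(w)  # box center
    top, bot = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    left, right = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)
    out = img_a.copy()
    out[top:bot, left:right] = img_b[top:bot, left:right]
    # Re-weight labels by the exact pasted area
    lam_adj = 1 - (bot - top) * (right - left) / (h * w)
    return out, lam_adj * lbl_a + (1 - lam_adj) * lbl_b
```

Both operations keep the mixed label a valid probability distribution, which is what lets them act as class-balancing regularizers when minority-class samples are paired with majority-class ones during training.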
📝 Abstract
Accurate classification of pests and diseases plays a vital role in precision agriculture, enabling efficient identification, targeted interventions, and prevention of further spread. However, current methods primarily focus on binary classification, which limits their practical applicability, especially in scenarios where identifying the specific type of disease or pest is essential. We propose a robust deep learning-based model for multi-class classification of onion crop diseases and pests. We enhance a pre-trained Convolutional Neural Network (CNN) by integrating attention-based modules and employing a comprehensive data augmentation pipeline to mitigate class imbalance. The proposed model achieves 96.90% overall accuracy and an F1-score of 0.96 on a real-world field image dataset, outperforming other approaches evaluated on the same data.