🤖 AI Summary
This study addresses the challenge of automated multi-class classification of brain tumors—including glioma, meningioma, pituitary tumor, and non-tumorous cases—in MRI images. The authors propose a novel approach that integrates colormap-based feature representation with a Vision Transformer (ViT) architecture. By leveraging colormaps to enhance critical structural details and intensity variations in MRI scans and employing ViT to capture long-range spatial dependencies, the method substantially improves discriminative capability and generalization performance. To the best of the authors’ knowledge, this is the first work to combine colormap mapping with ViT for brain tumor classification. Evaluated on the BRISC2025 dataset, the proposed model achieves an accuracy of 98.90% and an AUC of 99.97%, significantly outperforming established CNN baselines such as ResNet50, ResNet101, and EfficientNetB2.
📝 Abstract
Accurate classification of brain tumors from magnetic resonance imaging (MRI) plays a critical role in early diagnosis and effective treatment planning. In this study, we propose a deep learning framework based on Vision Transformers (ViT) enhanced with colormap-based feature representation to improve multi-class brain tumor classification performance. The proposed approach leverages the ability of transformer architectures to capture long-range dependencies while incorporating color mapping techniques to emphasize important structural and intensity variations within MRI scans.
Experiments are conducted on the BRISC2025 dataset, which includes four classes: glioma, meningioma, pituitary tumor, and non-tumor cases. The model is trained and evaluated using standard performance metrics such as accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC). The proposed method achieves a classification accuracy of 98.90%, outperforming baseline convolutional neural network models including ResNet50, ResNet101, and EfficientNetB2. In addition, the model demonstrates strong generalization capability with an AUC of 99.97%, indicating high discriminative performance across all classes. These results highlight the effectiveness of combining Vision Transformers with colormap-based feature enhancement for accurate and robust brain tumor classification and suggest strong potential for clinical decision support applications.
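The colormap-based feature representation described above can be sketched as a preprocessing step that maps each single-channel MRI slice to a three-channel pseudo-color image suitable for a ViT's RGB input. This is an illustrative assumption: the specific colormap (`jet` here) and min–max normalization are not specified in this summary and may differ from the authors' pipeline.

```python
import numpy as np
import matplotlib.pyplot as plt


def colormap_encode(slice_2d: np.ndarray, cmap_name: str = "jet") -> np.ndarray:
    """Map a single-channel MRI slice to a 3-channel RGB image via a colormap.

    Intensities are min-max normalized to [0, 1] before applying the
    colormap, so structural and intensity variations map onto distinct hues.
    """
    s = slice_2d.astype(np.float32)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)   # normalize to [0, 1]
    rgba = plt.get_cmap(cmap_name)(s)                # (H, W, 4), values in [0, 1]
    return rgba[..., :3]                             # drop alpha -> (H, W, 3)


# Example with a synthetic 224x224 "slice" (the standard ViT input size)
mri = np.random.rand(224, 224)
rgb = colormap_encode(mri)
print(rgb.shape)  # (224, 224, 3)
```

The resulting three-channel array can then be fed to any standard ViT backbone (e.g. a patch-16, 224×224 model) in place of a natural RGB image.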