AI Summary
To address key challenges in hyperspectral image (HSI) analysis, including high-dimensional spectral redundancy, significant spectral variability, and the difficulty of modeling long-range spectral-spatial dependencies, this paper proposes an end-to-end Transformer-based framework. Its key contributions are: (1) a Multi-Head Energy Attention (MHEA) mechanism that enhances the discriminability of spectral responses; (2) Fourier Position Embedding (FoPE), which enables adaptive modeling of long-range spectral-spatial dependencies; and (3) an Enhanced Convolutional Block Attention Module (ECBAM) that improves band-wise selectivity and structural awareness. Evaluated on the WHU-Hi-HanChuan, Salinas, and Pavia University datasets, the method achieves overall accuracies of 99.28%, 98.63%, and 98.72%, respectively, substantially outperforming CNN, standard Transformer, and Mamba-based baselines. These results demonstrate the framework's effectiveness and state-of-the-art performance for fine-grained HSI classification.
Abstract
Hyperspectral imaging (HSI) provides rich spectral-spatial information across hundreds of contiguous bands, enabling precise material discrimination in applications such as environmental monitoring, agriculture, and urban analysis. However, the high dimensionality and spectral variability of HSI data pose significant challenges for feature extraction and classification. This paper presents EnergyFormer, a transformer-based framework designed to address these challenges through three key innovations: (1) Multi-Head Energy Attention (MHEA), which optimizes an energy function to selectively enhance critical spectral-spatial features, improving feature discrimination; (2) Fourier Position Embedding (FoPE), which adaptively encodes spectral and spatial dependencies to reinforce long-range interactions; and (3) Enhanced Convolutional Block Attention Module (ECBAM), which selectively amplifies informative wavelength bands and spatial structures, enhancing representation learning. Extensive experiments on the WHU-Hi-HanChuan, Salinas, and Pavia University datasets demonstrate that EnergyFormer achieves exceptional overall accuracies of 99.28%, 98.63%, and 98.72%, respectively, outperforming state-of-the-art CNN, transformer, and Mamba-based models. The source code will be made available at https://github.com/mahmad000.
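To make the position-embedding idea concrete, the sketch below shows a generic Fourier-feature position embedding for scalar positions such as spectral band or spatial patch indices. This is an illustrative assumption, not the paper's FoPE: the function name, the geometric frequency schedule, and the sin/cos split are hypothetical choices, and the actual FoPE may combine spectral and spatial axes differently.

```python
import numpy as np

def fourier_position_embedding(positions, dim, max_freq=10.0):
    """Map scalar positions (e.g. band indices) to a dim-dimensional
    Fourier feature vector. Illustrative sketch only; the paper's FoPE
    may use a different frequency schedule or axis coupling."""
    # Geometrically spaced frequencies; half the dimensions carry sin,
    # the other half cos, so dim must be even.
    freqs = np.geomspace(1.0, max_freq, dim // 2)
    angles = np.outer(positions, freqs)  # shape: (n_positions, dim // 2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# Embed five spectral band positions into 8-dimensional features.
emb = fourier_position_embedding(np.arange(5), dim=8)
print(emb.shape)  # (5, 8)
```

Because sinusoids of many frequencies are superimposed, nearby positions get similar embeddings while distant ones remain distinguishable, which is what lets attention layers reason about long-range spectral-spatial order.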