🤖 AI Summary
Traditional convolutional denoising autoencoders (DAEs) for dental panoramic X-ray images suffer from insufficient recovery of high-frequency anatomical details, while existing self-attention mechanisms often overlook noise-obscured critical structures. To address this, we propose a Noise-Aware Self-Attention Mechanism (NASAM), which explicitly models the noise distribution to steer attention toward salient high-frequency regions masked by noise, thereby overcoming the bias of conventional methods toward clean, low-detail areas. Integrated into a lightweight convolutional DAE architecture, NASAM enhances reconstruction fidelity of fine anatomical structures (e.g., root apices, trabecular bone) without significant parameter overhead. Experiments on public dental X-ray datasets demonstrate that our method achieves superior PSNR and SSIM compared to state-of-the-art models including Uformer and MResDNN. Moreover, qualitative evaluation confirms marked improvements in image interpretability and clinical diagnostic utility.
📄 Abstract
Convolutional denoising autoencoders (DAEs) are powerful tools for image restoration. However, they inherit a key limitation of convolutional neural networks (CNNs): they tend to recover low-frequency features, such as smooth regions, more effectively than high-frequency details. This leads to the loss of fine details, which is particularly problematic in dental radiographs, where preserving subtle anatomical structures is crucial. While self-attention mechanisms can help mitigate this issue by emphasizing important features, conventional attention methods often prioritize features corresponding to cleaner regions and may overlook those obscured by noise. To address this limitation, we propose a noise-aware self-attention method, which allows the model to effectively focus on and recover key features even within noisy regions. Building on this approach, we introduce the noise-aware attention-enhanced denoising autoencoder (NAADA) network for enhancing noisy panoramic dental radiographs. Compared with recent state-of-the-art (and much heavier) methods such as Uformer and MResDNN, our method improves the reconstruction of fine details, ensuring better image quality and diagnostic accuracy.
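To make the core idea concrete, here is a minimal NumPy sketch of noise-aware self-attention over flattened spatial positions. It assumes the estimated noise map enters as an additive bias on the attention logits, so positions the noise map marks as heavily corrupted receive more (rather than less) attention. The function name, the identity Q/K/V projections, and the exact biasing rule are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def noise_aware_attention(feats, noise_map, scale=1.0):
    """Single-head self-attention over n spatial positions with d channels.

    feats:     (n, d) feature vectors, one per spatial position.
    noise_map: (n,)   estimated noise salience per position (hypothetical
               input; in practice it would come from a noise-estimation
               branch of the network).
    """
    n, d = feats.shape
    q, k, v = feats, feats, feats        # identity projections for brevity
    logits = q @ k.T / np.sqrt(d)        # standard scaled dot-product scores
    # Key step: bias logits by the noise salience of each *key* position,
    # steering attention toward noise-obscured regions instead of away.
    logits = logits + scale * noise_map[None, :]
    attn = softmax(logits, axis=-1)      # rows sum to 1
    return attn @ v                      # noise-aware mixture of values
```

Under this toy formulation, raising `noise_map` at a position pulls every output vector toward that position's features, which is the opposite of what a plain softmax over clean-region-dominated scores would do.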