🤖 AI Summary
To address radiologists' clinical need for interpretable and verifiable style adjustments in X-ray imaging, this paper proposes an explainable style transfer method designed specifically for mammograms. Methodologically, it replaces the conventional handcrafted remap function of the Local Laplacian Filter with a trainable one; models the nonlinear style mapping with a multilayer perceptron (MLP); and adds a learnable normalization layer so that the transferred style retains a physical interpretation. The framework is presented as the first in medical image style transfer to be both semantically interpretable and verifiably reliable. Evaluated on a mammography dataset, it achieves an SSIM of 0.94, clearly surpassing the baseline's 0.82, while generating clinically preferred images whose style adjustments remain radiologically meaningful.
📝 Abstract
Radiologists have preferred visual impressions, or 'styles', of X-ray images, which are manually adjusted to their needs to support diagnostic performance. In this work, we propose an automatic and interpretable X-ray style transfer by introducing a trainable version of the Local Laplacian Filter (LLF). From the shape of the LLF's optimized remap function, the characteristics of the style transfer can be inferred and the reliability of the algorithm can be verified. Moreover, we enable the LLF to capture complex X-ray style features by replacing the remap function with a Multi-Layer Perceptron (MLP) and adding a trainable normalization layer. We demonstrate the effectiveness of the proposed method by transforming unprocessed mammographic X-ray images into images that match the style of target mammograms, achieving a Structural Similarity Index (SSIM) of 0.94, compared to 0.82 for the baseline LLF style transfer method of Aubry et al.
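The central idea, swapping the LLF's handcrafted remap function for a trainable one, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the classic remap parameters (`sigma_r`, `alpha`), the tiny MLP architecture, and its random placeholder weights are all assumptions made for the sketch. In the paper, the MLP weights would be optimized so that the filtered image matches the target style.

```python
import numpy as np

def classic_remap(i, g0, sigma_r=0.2, alpha=0.5):
    """Handcrafted LLF remap in the spirit of Paris/Aubry et al.:
    intensities i are remapped around the local Gaussian coefficient g0."""
    d = i - g0
    detail = np.abs(d) / sigma_r
    # boost/compress small details inside the sigma_r band; leave edges linear
    return np.where(
        np.abs(d) <= sigma_r,
        g0 + np.sign(d) * sigma_r * detail**alpha,
        i,
    )

class MLPRemap:
    """Trainable remap: a tiny 2-layer perceptron mapping (i, g0) -> r(i, g0).
    Weights here are random placeholders; training them lets the remap
    express style curves a fixed handcrafted function cannot."""
    def __init__(self, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (2, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, i, g0):
        # pointwise evaluation: each pixel paired with its local coefficient
        x = np.stack([i.ravel(), np.broadcast_to(g0, i.shape).ravel()], axis=1)
        h = np.tanh(x @ self.W1 + self.b1)   # nonlinearity for complex remaps
        out = (h @ self.W2 + self.b2).ravel()
        return out.reshape(i.shape)

if __name__ == "__main__":
    patch = np.linspace(0.0, 1.0, 64).reshape(8, 8)  # toy intensity patch
    g0 = float(patch.mean())                         # local Gaussian coefficient
    print(classic_remap(patch, g0).shape)  # (8, 8)
    print(MLPRemap()(patch, g0).shape)     # (8, 8)
```

In the full LLF, such a remap is evaluated once per Laplacian-pyramid coefficient; the interpretability claim rests on the fact that the learned remap is still a low-dimensional, plottable curve whose shape can be inspected by radiologists.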