🤖 AI Summary
Medical imaging artifacts severely compromise diagnostic accuracy, and existing image-based methods rely on preprocessing that causes information loss and high memory consumption, limiting model scalability. To address these limitations, this work introduces implicit neural representations (INRs) to medical image quality assessment for the first time. The proposed end-to-end framework processes INR weights with a deep weight-space network, incorporates a graph neural network (GNN) to model local structural relationships, and employs a relation-aware attention Transformer to capture long-range quality dependencies. By operating directly on low-dimensional continuous representations, the method natively supports multi-resolution and arbitrary-size inputs without resampling or patching. Experiments on the ACDC dataset demonstrate performance competitive with state-of-the-art methods while reducing parameter count by 32% and GPU memory usage by 47%, substantially improving computational efficiency and deployability.
📝 Abstract
Artifacts pose a significant challenge in medical imaging, impacting diagnostic accuracy and downstream analysis. While image-based approaches for detecting artifacts can be effective, they often rely on preprocessing that can cause information loss and impose high memory demands, thereby limiting the scalability of classification models. In this work, we propose the use of implicit neural representations (INRs) for image quality assessment. INRs provide a compact and continuous representation of medical images, naturally handling variations in resolution and image size while reducing memory overhead. We develop deep weight space networks, graph neural networks, and relational attention transformers that operate on INRs to achieve image quality assessment. Our method is evaluated on the ACDC dataset with synthetically generated artifact patterns, demonstrating its effectiveness in assessing image quality while achieving comparable performance with fewer parameters.
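The abstract's central idea — an image stored as the weights of a small coordinate network, queryable at any resolution, with those weights serving as a fixed-length input to downstream quality classifiers — can be sketched as follows. This is a minimal illustrative toy, not the paper's architecture: the two-layer sinusoidal MLP, its sizes, and the `TinyINR` name are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyINR:
    """Hypothetical minimal INR: maps (x, y) in [0, 1]^2 to an intensity."""

    def __init__(self, hidden=16):
        # Randomly initialized for illustration; a real INR would be fit
        # to one image by gradient descent.
        self.W1 = rng.normal(0.0, 1.0, (2, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 1.0, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, coords):
        # coords: (N, 2) array of positions; sinusoidal activation is a
        # SIREN-style choice, used here only as an assumption.
        h = np.sin(coords @ self.W1 + self.b1)
        return h @ self.W2 + self.b2

    def flat_weights(self):
        # Fixed-length vector regardless of pixel resolution -- the kind
        # of object a weight-space classifier would consume.
        return np.concatenate(
            [p.ravel() for p in (self.W1, self.b1, self.W2, self.b2)]
        )

inr = TinyINR()

# Query the same INR at two resolutions: no resampling or patching needed.
for res in (8, 32):
    xs, ys = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res))
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1)
    img = inr(coords).reshape(res, res)
    print(res, img.shape)

print("weight-vector length:", inr.flat_weights().size)
```

The point of the sketch is the last line: however large the rendered image, the weight vector has constant size, which is why weight-space networks avoid the memory scaling of pixel-based pipelines.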