Weakly-Supervised PET Anomaly Detection using Implicitly-Guided Attention-Conditional Counterfactual Diffusion Modeling: a Multi-Center, Multi-Cancer, and Multi-Tracer Study

📅 2024-04-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high cost of pixel-level annotations in PET lesion detection, this paper proposes a weakly supervised anomaly detection method requiring only image-level healthy/unhealthy labels, with no lesion localization annotations. Our approach leverages an implicitly guided, attention-conditioned counterfactual diffusion model to synthesize healthy ("unhealthy-to-healthy") versions of abnormal PET images; lesions are then localized from the difference maps between the original and synthesized images. We present the first systematic evaluation of weakly supervised generalization across multi-center, multi-cancer, and multi-radiotracer PET data (2,652 cases), and uncover the critical role of attention mechanisms in detecting small, subtle lesions. Compared to GAN- and VAE-based baselines and conventional SUVmax thresholding, our method achieves a 12.7% improvement in sensitivity, outperforming existing approaches. The implementation is publicly available.
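The localization step described above reduces to a voxel-wise comparison between the unhealthy input and its synthesized healthy counterfactual. A minimal sketch of that step, assuming NumPy arrays as inputs; the function name and the optional threshold parameter are illustrative, not from the paper's released code:

```python
import numpy as np

def anomaly_map(original, counterfactual, threshold=None):
    """Voxel-wise anomaly map from a counterfactual pair.

    original:       unhealthy input PET image (array)
    counterfactual: synthesized healthy version of the same image
    threshold:      if given, binarize the map into a lesion mask
                    (illustrative assumption, not the paper's exact rule)
    """
    # Anomalies show up where the counterfactual differs from the input.
    diff = np.abs(np.asarray(original, dtype=float)
                  - np.asarray(counterfactual, dtype=float))
    if threshold is None:
        return diff           # continuous anomaly heatmap
    return diff >= threshold  # binary lesion mask
```

In practice the continuous map would typically be post-processed (e.g. smoothed or thresholded) before being scored against reference segmentations.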

📝 Abstract
Minimizing the need for pixel-level annotated data to train PET lesion detection and segmentation networks is highly desired and can be transformative, given the time and cost constraints associated with expert annotations. Current un-/weakly-supervised anomaly detection methods rely on autoencoders or generative adversarial networks (GANs) trained only on healthy data; however, GAN-based networks are more challenging to train due to issues with the simultaneous optimization of two competing networks, mode collapse, etc. In this paper, we present the weakly-supervised Implicitly guided COuNterfactual diffusion model for Detecting Anomalies in PET images (IgCONDA-PET). The solution is developed and validated using PET scans from six retrospective cohorts consisting of a total of 2,652 cases spanning both local and public datasets. The training is conditioned on image class labels (healthy vs. unhealthy) via attention modules, and we employ implicit diffusion guidance. We perform counterfactual generation, which facilitates "unhealthy-to-healthy" domain translation by generating a synthetic, healthy version of an unhealthy input image, enabling the detection of anomalies through the calculated differences. The performance of our method was compared against several other deep learning-based weakly-supervised or unsupervised methods, as well as traditional methods such as 41% SUVmax thresholding. We also highlight the importance of incorporating attention modules in our network for the detection of small anomalies. The code is publicly available at: https://github.com/ahxmeds/IgCONDA-PET.git.
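The traditional 41% SUVmax thresholding baseline mentioned in the abstract segments all voxels above a fixed fraction of the image's maximum standardized uptake value. A minimal sketch, assuming a NumPy SUV volume; the function name and interface are illustrative, not taken from the paper's code:

```python
import numpy as np

def suvmax_threshold_mask(suv_volume, fraction=0.41):
    """Fixed-fraction SUVmax thresholding.

    suv_volume: PET volume in SUV units (array)
    fraction:   fraction of SUVmax used as the cutoff
                (0.41 is the conventional 41% rule cited in the abstract)
    """
    suv = np.asarray(suv_volume, dtype=float)
    cutoff = fraction * suv.max()
    return suv >= cutoff  # boolean mask of candidate lesion voxels
```

This rule is simple but depends entirely on the single hottest voxel, which is one reason learning-based methods can outperform it on small or low-uptake lesions.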
Problem

Research questions and friction points this paper is trying to address.

Detect PET anomalies with minimal annotated data
Improve weakly-supervised anomaly detection using diffusion models
Generate healthy counterfactuals for anomaly identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weakly-supervised anomaly detection
Attention-guided counterfactual diffusion
Multi-center PET image analysis
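The "implicit diffusion guidance" listed above is commonly realized as classifier-free guidance, in which the denoiser's class-conditional and unconditional noise predictions are linearly combined at each sampling step. A minimal sketch of that combination; the function name and guidance scale are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def guided_noise_estimate(eps_uncond, eps_cond, w=3.0):
    """Classifier-free ("implicit") guidance for one sampling step.

    eps_uncond: noise prediction with a null/unconditional label
    eps_cond:   noise prediction conditioned on the target class
                (e.g. "healthy", for unhealthy-to-healthy translation)
    w:          guidance scale; w = 0 recovers the unconditional
                prediction, larger w pushes harder toward the condition
    """
    # Steer the sample toward the conditioned class without an
    # external classifier, using only the model's own predictions.
    return eps_uncond + w * (eps_cond - eps_uncond)
```

During counterfactual generation, conditioning on the "healthy" label with a suitable guidance scale is what drives the synthesized image away from lesion-containing content.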