🤖 AI Summary
Early identification of diabetic foot ulcer (DFU) infection remains challenging, especially in resource-constrained settings with incomplete medical records or inexperienced nursing staff. To address this, we propose SCARWID, a synthetic caption-augmented retrieval framework built around Wound-BLIP, a multimodal vision-language model designed for wound description. The framework combines high-fidelity textual descriptions generated by GPT-4o, latent-diffusion-based synthetic wound image generation, and cross-modal cross-attention, coupled with k-nearest-neighbor support-set retrieval for interpretable classification. On the binary infection classification task, our approach achieves 0.85 sensitivity, 0.78 specificity, and 0.81 accuracy. Crucially, transparent, case-based reasoning helps clinicians understand and trust AI-driven decisions. This work establishes a new paradigm for intelligent, explainable wound assessment in low-resource clinical environments.
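The k-nearest-neighbor support-set retrieval described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, cosine-similarity metric, embedding dimensions, and majority-vote rule are all assumptions for the sake of the example. Returning the neighbor indices alongside the label is what enables the case-based explanation, since the retrieved support images and captions can be shown to the clinician.

```python
import numpy as np

def knn_support_retrieval(query_emb, support_embs, support_labels, k=5):
    """Classify a query embedding by majority vote over its k nearest
    support-set neighbors (cosine similarity). Returns the predicted label
    plus the neighbor indices so retrieved cases can be displayed.
    Hypothetical sketch, not the paper's actual code."""
    # L2-normalize so dot products equal cosine similarity
    q = query_emb / np.linalg.norm(query_emb)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sims = s @ q                          # similarity to every support item
    top_k = np.argsort(sims)[::-1][:k]    # indices of the k most similar
    votes = [support_labels[i] for i in top_k]
    label = max(set(votes), key=votes.count)  # simple majority vote
    return label, top_k
```

In practice the embeddings would come from the cross-modal fusion module, and the support set would be the labeled training pool.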
📝 Abstract
Infections in Diabetic Foot Ulcers (DFUs) can cause severe complications, including tissue death and limb amputation, highlighting the need for accurate, timely diagnosis. Previous machine learning methods have focused on identifying infections by analyzing wound images alone, without utilizing additional metadata such as medical notes. In this study, we aim to improve infection detection by introducing Synthetic Caption Augmented Retrieval for Wound Infection Detection (SCARWID), a novel deep learning framework that leverages synthetic textual descriptions to augment DFU images. SCARWID consists of two components: (1) Wound-BLIP, a Vision-Language Model (VLM) fine-tuned on GPT-4o-generated descriptions to synthesize consistent captions from images; and (2) an Image-Text Fusion module that uses cross-attention to extract cross-modal embeddings from an image and its corresponding Wound-BLIP caption. Infection status is determined by retrieving the top-k most similar items from a labeled support set. To enhance the diversity of the training data, we used a latent diffusion model to generate additional wound images. SCARWID outperformed state-of-the-art models, achieving average sensitivity, specificity, and accuracy of 0.85, 0.78, and 0.81, respectively, for wound infection classification. Displaying the generated captions alongside the wound images and infection detection results enhances interpretability and trust, enabling nurses to align SCARWID outputs with their medical knowledge. This is particularly valuable when wound notes are unavailable or when assisting novice nurses who may find it difficult to identify visual attributes of wound infection.
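The Image-Text Fusion step can be illustrated with a single-head cross-attention sketch, where image patch tokens (queries) attend to caption tokens (keys/values) and the result is pooled into one cross-modal embedding. All names, shapes, and the mean-pooling choice here are illustrative assumptions; the paper's module is a learned neural network, not this NumPy toy.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(img_tokens, txt_tokens, Wq, Wk, Wv):
    """Single-head cross-attention: image tokens query the caption tokens,
    yielding image features enriched with textual context, then mean-pooled
    into one cross-modal embedding. Hypothetical sketch; Wq/Wk/Wv stand in
    for learned projection matrices."""
    Q = img_tokens @ Wq                   # (n_img, d) queries from the image
    K = txt_tokens @ Wk                   # (n_txt, d) keys from the caption
    V = txt_tokens @ Wv                   # (n_txt, d) values from the caption
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (n_img, n_txt) weights
    fused = attn @ V                      # text-informed image tokens
    return fused.mean(axis=0)             # pooled cross-modal embedding
```

An embedding produced this way for a query image-caption pair would then be compared against support-set embeddings in the retrieval step.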