🤖 AI Summary
This study investigates whether text-to-image diffusion models implicitly encode the pathological speech features of dementia patients and achieve cross-modal semantic alignment. Method: using transcribed speech from the ADReSS dataset as prompts, we drive Stable Diffusion to generate images and employ Class Activation Mapping (CAM) together with feature attribution techniques to interpret the language-image alignment mechanism. Contribution/Results: dementia classification (patients vs. healthy controls) reaches 75% accuracy using the generated images alone, evidence that the synthetic visual representations encode discriminative neuropathological information. Interpretability analysis further identifies diagnostically salient linguistic markers, such as pronoun omission and semantically vague expressions, as key drivers of image content. This work establishes a novel, non-invasive paradigm for neurodegenerative disease screening grounded in generative multimodal representation learning.
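A minimal sketch of the generation step described above, assuming the Hugging Face `diffusers` API; the checkpoint name, sampling settings, and example transcript are illustrative assumptions rather than the paper's exact configuration:

```python
# Sketch: turn a transcribed speech sample into an image with Stable Diffusion.
# Assumptions (not from the paper): the runwayml/stable-diffusion-v1-5 checkpoint,
# the default scheduler, 50 denoising steps, and a made-up transcript excerpt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A transcript excerpt (e.g., a Cookie Theft picture description, as used in
# ADReSS) is passed directly as the text prompt.
transcript = "the boy is on the stool and the stool is tipping over and uh the water"
image = pipe(transcript, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("generated_sample.png")
```

Note that the CLIP text encoder behind Stable Diffusion truncates prompts at 77 tokens, so longer transcripts would need chunking or summarization before being used as prompts.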
📝 Abstract
Text-to-image models generate highly realistic images from natural language descriptions, and millions of users rely on them to create and share images online. While such models are expected to align the input text and the generated image in a shared latent space, little has been done to understand whether this alignment is possible between pathological speech and generated images. In this work, we examine the ability of such models to align dementia-related speech information with the generated images and develop methods to explain this alignment. Surprisingly, we found that dementia detection is possible from the generated images alone, achieving 75% accuracy on the ADReSS dataset. We then leverage explainability methods to show which parts of the language contribute to the detection.
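One plausible way to realize the detection and interpretability steps is a standard image classifier over the generated images combined with Grad-CAM-style saliency; the ResNet-18 backbone, preprocessing, and hand-rolled Grad-CAM below are assumptions for illustration, not the authors' exact setup:

```python
# Sketch: binary dementia/control classifier over generated images, plus a
# hand-rolled Grad-CAM to highlight which image regions drive the decision.
# Assumptions: ResNet-18 backbone, ImageNet preprocessing; in practice the
# classifier head would first be fine-tuned on the ADReSS training split.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # dementia vs. control
model.eval()

# Capture activations and gradients at the last convolutional block.
feats, grads = {}, {}
def fwd_hook(module, inp, out): feats["a"] = out
def bwd_hook(module, gin, gout): grads["a"] = gout[0]
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

x = preprocess(Image.open("generated_sample.png").convert("RGB")).unsqueeze(0)
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the predicted class score

# Grad-CAM: weight each channel by its spatially pooled gradient, then ReLU.
w = grads["a"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```

The language-side attributions mentioned above (which words drive the image content) would require a complementary attribution over the prompt tokens, which is beyond this sketch.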