Beyond vividness: Content analysis of induced hallucinations reveals the hidden structure of individual differences in visual imagery

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how visual imagery ability—categorized as absent, typical, or vivid—affects the content characteristics of Ganzflicker-induced visual hallucinations. Using over 4,000 subjective textual reports, we integrated vision-language model (VLM) embeddings with multimodal behavioral data (eye-tracking and hand-motion sensors) to systematically analyze the semantic structure and cross-modal associations of hallucinatory content. Results show that high-imagery individuals predominantly report concrete, naturalistic scenes, whereas low-imagery participants mostly report simple geometric patterns. VLMs significantly outperformed text-only models in decoding imagery strength, and high-imagery subjects’ linguistic descriptions exhibited stronger sensorimotor coupling. This work provides the first empirical evidence linking individual differences in visual imagery capacity to distinct neurocomputational mechanisms underlying hallucination generation, offering cross-modal insights into how visual cortex–higher-order cortical interactions shape idiosyncratic internal representations.

📝 Abstract
A rapidly alternating red and black display known as Ganzflicker induces visual hallucinations that reflect the generative capacity of the visual system. Recent proposals regarding the imagery spectrum, that is, differences in the visual system of individuals with absent imagery, typical imagery, and vivid imagery, suggest these differences should impact the complexity of other internally generated visual experiences. Here, we used tools from natural language processing to analyze free-text descriptions of hallucinations from over 4,000 participants, asking whether people with different imagery phenotypes see different things in their mind's eye during Ganzflicker-induced hallucinations. Strong imagers described complex, naturalistic content, while weak imagers reported simple geometric patterns. Embeddings from vision language models better captured these differences than text-only language models, and participants with stronger imagery used language with richer sensorimotor associations. These findings may reflect individual variation in coordination between early visual areas and higher-order regions relevant for the imagery spectrum.
Problem

Research questions and friction points this paper is trying to address.

Analyzes Ganzflicker-induced hallucinations across imagery phenotypes
Compares hallucination content between strong and weak imagers
Investigates neural coordination differences in visual imagery spectrum
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used NLP to analyze hallucination descriptions
Applied vision language models for content differences
Linked imagery strength to sensorimotor language richness
Ana Chkhaidze
Department of Cognitive Science, University of California, San Diego (USA)
Reshanne R. Reeder
Department of Psychology, Institute of Population Health, University of Liverpool (UK)
Connor Gag
Department of Computer Science, University of California, San Diego (USA)
Anastasia Kiyonaga
Department of Cognitive Science, University of California, San Diego (USA)
Seana Coulson
Cognitive Science, UCSD
neurobiology of language · pragmatics · concepts