Uncovering Semantic Selectivity of Latent Groups in Higher Visual Cortex with Mutual Information-Guided Diffusion

📅 2025-10-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses how neural populations in high-level visual cortex—specifically macaque inferior temporal (IT) cortex—encode object-centered representations in a structured, semantically meaningful manner. To this end, we propose MIG-Vis: a novel framework integrating a joint variational autoencoder (to extract disentangled neural latent spaces) with a mutual information-guided diffusion model (to synthesize interpretable, semantically grounded images). MIG-Vis enables the first direct visualization of semantic selectivity within neural latent subspaces. Validated on multi-session macaque IT electrophysiological recordings, the method robustly identifies distinct latent neural groups encoding semantic dimensions—including object pose, cross-category transformations, and within-category fine-grained details. These findings reveal a hierarchically organized, semantics-driven representational architecture in high-level visual cortex. By bridging neural activity and interpretable semantic features in a fully data-driven manner, MIG-Vis establishes a new, interpretable paradigm for investigating the neural mechanisms underlying visual semantic coding.

Technology Category

Application Category

📝 Abstract
Understanding how neural populations in higher visual areas encode object-centered visual information remains a central challenge in computational neuroscience. Prior works have investigated representational alignment between artificial neural networks and the visual cortex. Nevertheless, these findings are indirect and offer limited insights to the structure of neural populations themselves. Similarly, decoding-based methods have quantified semantic features from neural populations but have not uncovered their underlying organizations. This leaves open a scientific question: "how feature-specific visual information is distributed across neural populations in higher visual areas, and whether it is organized into structured, semantically meaningful subspaces." To tackle this problem, we present MIG-Vis, a method that leverages the generative power of diffusion models to visualize and validate the visual-semantic attributes encoded in neural latent subspaces. Our method first uses a variational autoencoder to infer a group-wise disentangled neural latent subspace from neural populations. Subsequently, we propose a mutual information (MI)-guided diffusion synthesis procedure to visualize the specific visual-semantic features encoded by each latent group. We validate MIG-Vis on multi-session neural spiking datasets from the inferior temporal (IT) cortex of two macaques. The synthesized results demonstrate that our method identifies neural latent groups with clear semantic selectivity to diverse visual features, including object pose, inter-category transformations, and intra-class content. These findings provide direct, interpretable evidence of structured semantic representation in the higher visual cortex and advance our understanding of its encoding principles.
Problem

Research questions and friction points this paper is trying to address.

Reveals how neural populations encode object-centered visual information
Identifies structured semantic representations in higher visual cortex
Visualizes feature-specific information distribution across neural latent groups
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses variational autoencoder for neural latent subspaces
Applies mutual information-guided diffusion synthesis
Visualizes semantic selectivity in neural groups
🔎 Similar Papers
No similar papers found.