Feature Space Analysis by Guided Diffusion Model

📅 2025-09-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Poor semantic interpretability of DNN feature spaces hinders precise localization of correspondences between image attributes and neural activations. To address this, we propose a plug-and-play, fine-tuning-free feature inversion framework that leverages the reverse generative process of a pre-trained diffusion model: given a target feature vector, it optimizes the generated image via Euclidean distance minimization to achieve accurate feature-to-image decoding. Our method is architecture-agnostic, supporting mainstream vision models—including CLIP, ResNet-50, and ViT—without additional training. It enables efficient, high-fidelity visualization of semantically meaningful patterns encoded by these models (e.g., texture, shape, class-discriminative features), thereby exposing their internal representational structure. Experiments demonstrate strong alignment between inverted and target features, with cosine similarity exceeding 0.92. This establishes a scalable, high-fidelity paradigm for DNN interpretability analysis.

Technology Category

Application Category

📝 Abstract
One of the key issues in Deep Neural Networks (DNNs) is the black-box nature of their internal feature extraction process. Targeting vision-related domains, this paper focuses on analysing the feature space of a DNN by proposing a decoder that can generate images whose features are guaranteed to closely match a user-specified feature. Owing to this guarantee that is missed in past studies, our decoder allows us to evidence which of various attributes in an image are encoded into a feature by the DNN, by generating images whose features are in proximity to that feature. Our decoder is implemented as a guided diffusion model that guides the reverse image generation of a pre-trained diffusion model to minimise the Euclidean distance between the feature of a clean image estimated at each step and the user-specified feature. One practical advantage of our decoder is that it can analyse feature spaces of different DNNs with no additional training and run on a single COTS GPU. The experimental results targeting CLIP's image encoder, ResNet-50 and vision transformer demonstrate that images generated by our decoder have features remarkably similar to the user-specified ones and reveal valuable insights into these DNNs' feature spaces.
Problem

Research questions and friction points this paper is trying to address.

Analyzing black-box feature extraction in deep neural networks
Generating images matching user-specified feature vectors
Revealing visual attributes encoded in DNN feature spaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Guided diffusion model for feature matching
Decoder generates images with specified features
Analyzes DNN feature spaces without additional training
🔎 Similar Papers
No similar papers found.
Kimiaki Shirahama
Kimiaki Shirahama
Doshisha University
Multimedia retrievalMachine learningData miningHuman activity recognition
M
Miki Yanobu
Department of Information Systems Design, Doshisha University, Japan
K
Kaduki Yamashita
Department of Information Systems Design, Doshisha University, Japan
M
Miho Ohsaki
Department of Information Systems Design, Doshisha University, Japan