Just Noticeable Difference Modeling for Deep Visual Features

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of quantifying and controlling tolerable perturbations in deep visual features under resource-constrained settings while preserving downstream task performance. The authors propose FeatJND, the first method to introduce the concept of Just Noticeable Difference (JND) into deep feature space, establishing a task-aligned boundary for perturbation tolerance. Using an estimator at standardized split points, FeatJND computes a per-feature map of the maximum tolerable perturbation, enabling both feature-importance visualization and token-wise dynamic quantization. Experiments demonstrate that FeatJND-guided perturbations preserve task performance more effectively than Gaussian noise across image classification, object detection, and instance segmentation, and that under the same noise budget, the quantization strategy significantly outperforms random and globally uniform quantization baselines.
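The perturbation-tolerance idea can be illustrated with a minimal sketch (NumPy; the function name, the toy tolerance map, and the clipping rule are illustrative assumptions, not the paper's actual estimator): given a feature tensor and a per-element tolerance map, the injected noise is clipped so the distortion never leaves the task-aligned boundary, unlike unstructured Gaussian noise.

```python
import numpy as np

def jnd_bounded_perturbation(feat, jnd_map, rng):
    """Perturb features while staying inside a per-element tolerance bound.

    feat:    feature tensor, any shape
    jnd_map: non-negative per-element tolerance map, same shape as feat
    """
    noise = rng.standard_normal(feat.shape)
    # Clip each element's noise to its own tolerance, so the distortion
    # never exceeds the (hypothetical) task-aligned boundary.
    return feat + np.clip(noise, -jnd_map, jnd_map)

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8))    # toy stand-in for a deep feature map
jnd_map = 0.1 * np.abs(feat)          # toy stand-in for a FeatJND map
perturbed = jnd_bounded_perturbation(feat, jnd_map, rng)
assert np.all(np.abs(perturbed - feat) <= jnd_map + 1e-12)
```

An unstructured Gaussian baseline, by contrast, would add `rng.standard_normal(feat.shape)` directly, ignoring which elements are tolerance-critical.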

📝 Abstract
Deep visual features are increasingly used as the interface in vision systems, motivating the need to describe feature characteristics and control feature quality for machine perception. Just noticeable difference (JND) characterizes the maximum imperceptible distortion for images under human or machine vision. Extending it to deep visual features naturally meets this demand by providing a task-aligned tolerance boundary in feature space, offering a practical reference for controlling feature quality under constrained resources. We propose FeatJND, a task-aligned JND formulation that predicts the per-feature map of the maximum tolerable perturbation while preserving downstream task performance, together with a FeatJND estimator at standardized split points, and validate it across image classification, detection, and instance segmentation. Under matched distortion strength, FeatJND-based distortions consistently preserve higher task performance than unstructured Gaussian perturbations, and attribution visualizations suggest FeatJND can suppress non-critical feature regions. As an application, we further apply FeatJND to token-wise dynamic quantization and show that FeatJND-guided step-size allocation yields clear gains over random step-size permutation and a global uniform step size under the same noise budget. Our code will be released after publication.
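The token-wise dynamic quantization application can be sketched as follows (assumptions: NumPy, a uniform rounding quantizer whose noise power scales as step²/12, and per-token tolerances standing in for FeatJND values; this is not the paper's exact procedure): per-token step sizes are allocated proportionally to tolerance, then rescaled so the mean noise power matches the budget of a single global uniform step.

```python
import numpy as np

def tokenwise_steps(tolerance, uniform_step):
    """Allocate per-token quantization steps proportional to tolerance.

    Rescales so the mean of step^2 (hence mean noise power ~ step^2 / 12)
    equals that of a single global uniform step, i.e. a matched budget.
    """
    return tolerance / np.sqrt(np.mean(tolerance ** 2)) * uniform_step

def quantize(tokens, steps):
    # Round each token's features with that token's own step size.
    return np.round(tokens / steps[:, None]) * steps[:, None]

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 32))       # toy token features
tolerance = rng.uniform(0.5, 2.0, size=16)   # stand-in FeatJND tolerances
steps = tokenwise_steps(tolerance, uniform_step=0.05)
quantized = quantize(tokens, steps)
# Same mean noise budget as the global uniform step:
assert np.isclose(np.mean(steps ** 2), 0.05 ** 2)
```

Under this matched budget, tokens with larger tolerance absorb coarser steps while tolerance-critical tokens are quantized finely, which is the intuition behind the reported gains over random permutation and globally uniform step sizes.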
Problem

Research questions and friction points this paper is trying to address.

Just Noticeable Difference
Deep Visual Features
Feature Quality
Machine Perception
Task Performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Just Noticeable Difference
Deep Visual Features
Task-aligned Tolerance
Feature Perturbation
Dynamic Quantization