🤖 AI Summary
This work addresses the challenge of learning visual preferences from multimodal data without human annotations or external supervision. To this end, the authors propose a lightweight preference-learning framework that automatically constructs preference supervision by introducing perceptual discrepancies through image quality perturbations. The framework supports both labeled and unlabeled training regimes, eliminating the need for manual annotation, and it applies broadly across diverse visual degradations and model scales, serving as a viable alternative to rejection-sampling fine-tuning. Experimental results show that the proposed method significantly outperforms existing approaches on multiple multimodal benchmarks and effectively enhances model generalization.
📝 Abstract
We present VisualDeltas, a lightweight preference-learning framework that extracts supervision from visual quality variations in multimodal data. By leveraging the systematic impact of image quality on visual perception and reasoning, VisualDeltas induces informative preference signals without relying on human annotations or external teachers. The framework supports both label-free and label-based regimes, enabling flexible use of supervision when it is available. Across diverse multimodal benchmarks and model scales, VisualDeltas consistently outperforms rejection-sampling fine-tuning, improves generalization, and extends naturally to a range of visual degradations.
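To make the core idea concrete, below is a minimal sketch of how preference pairs could be derived from image quality perturbations. It assumes the clean image is treated as the preferred ("chosen") side and a degraded copy as the dispreferred ("rejected") side; the function names `degrade` and `build_preference_pair`, and the specific perturbations (Gaussian blur plus JPEG re-encoding), are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch only: constructing image-quality preference pairs.
# Assumes "chosen" = clean image, "rejected" = perturbed copy; VisualDeltas'
# actual construction (e.g., preferences over responses vs. images) may differ.
from io import BytesIO
from PIL import Image, ImageFilter


def degrade(image: Image.Image, blur_radius: float = 2.0, jpeg_quality: int = 20) -> Image.Image:
    """Apply a simple quality perturbation: Gaussian blur followed by lossy JPEG re-encoding."""
    blurred = image.filter(ImageFilter.GaussianBlur(radius=blur_radius))
    buf = BytesIO()
    blurred.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def build_preference_pair(image: Image.Image, question: str) -> dict:
    """Package a (clean, degraded) image pair as one preference example for DPO-style training."""
    return {
        "question": question,
        "chosen_image": image,             # higher perceptual quality -> preferred
        "rejected_image": degrade(image),  # perturbed copy -> dispreferred
    }


if __name__ == "__main__":
    # Synthetic image so the sketch runs without external data.
    img = Image.new("RGB", (256, 256), color=(180, 120, 60))
    pair = build_preference_pair(img, "What objects are visible in the image?")
    print(pair["question"], pair["chosen_image"].size, pair["rejected_image"].size)
```

In this reading, the perturbation strength acts as a free knob for generating preference signals at scale, which is what allows the framework to operate without human annotations; other degradation types (noise, downsampling, color shifts) could be swapped into `degrade` in the same way.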