🤖 AI Summary
Discrete tokens in autoregressive multimodal foundation models lack explicit metric structure, hindering their ability to preserve meaningful geometric, semantic, or spatial relationships during generation.
Method: This paper introduces DIST2Loss, a distance-aware training framework that converts pre-defined continuous metric distances between tokens—e.g., Euclidean, semantic, or spatial distances—into discrete classification targets. Leveraging exponential-family distribution modeling and vector-quantized feature interfaces, DIST2Loss implicitly enforces metric consistency in autoregressive token prediction without modifying the model architecture.
Contribution/Results: DIST2Loss strengthens multimodal distance awareness, yielding consistent performance gains across diverse tasks—including visual grounding, robotic manipulation, generative reward modeling, and VQ-based image generation—with particularly pronounced improvements when training data is limited. Empirical results show robust gains in both alignment fidelity and downstream generalization, supporting the value of explicitly grounding discrete token prediction in continuous metric geometry.
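To make the core idea concrete, here is a minimal sketch of the kind of distance-aware objective the summary describes: distances from each vocabulary token to the ground-truth token are passed through an exponential-family kernel to form a soft categorical target, which replaces the usual one-hot label in cross-entropy. This is an illustrative reconstruction, not the authors' exact formulation; the function names, the Euclidean distance choice, and the `temperature` parameter are assumptions.

```python
import numpy as np

def soft_targets_from_distances(token_values, target_idx, temperature=1.0):
    """Turn token-to-target distances into a soft categorical target.

    token_values: (V, D) array giving each discrete token's continuous
                  value (e.g., a quantized coordinate or codebook vector).
    target_idx:   index of the ground-truth token.
    Returns a (V,) probability vector that decays exponentially with
    distance from the target (an exponential-family weighting; sketch only).
    """
    d = np.linalg.norm(token_values - token_values[target_idx], axis=-1)
    w = np.exp(-d / temperature)
    return w / w.sum()

def distance_aware_cross_entropy(logits, soft_targets):
    """Cross-entropy against the soft, distance-derived target distribution."""
    # numerically stable log-softmax
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -(soft_targets * log_probs).sum()

# Toy example: four tokens discretizing the coordinates 0..3, target is token 1.
token_values = np.array([[0.0], [1.0], [2.0], [3.0]])
targets = soft_targets_from_distances(token_values, target_idx=1)
# The target token gets the most mass; nearby tokens are penalized less
# than distant ones, unlike a one-hot label.
```

Compared with a one-hot label, this target still concentrates mass on the correct token but assigns partial credit to metrically close tokens, which is what lets the model preserve distance structure without any architectural change.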
📝 Abstract
As large language models expand beyond natural language to domains such as mathematics, multimodal understanding, and embodied agents, tokens increasingly reflect metric relationships rather than purely linguistic meaning. We introduce DIST2Loss, a distance-aware framework designed to train autoregressive discrete models by leveraging predefined distance relationships among output tokens. At its core, DIST2Loss transforms continuous exponential family distributions derived from inherent distance metrics into discrete, categorical optimization targets compatible with the models' architectures. This approach enables the models to learn and preserve meaningful distance relationships during token generation while maintaining compatibility with existing architectures. Empirical evaluations show consistent performance gains in diverse multimodal applications, including visual grounding, robotic manipulation, generative reward modeling, and image generation using vector-quantized features. These improvements are pronounced in cases of limited training data, highlighting DIST2Loss's effectiveness in resource-constrained settings.