AI Summary
Text embedding models exhibit a systematic mean bias: outputs decompose into a sentence-invariant bias component and an effective semantic component. Method: We propose a plug-and-play, training-free renormalization technique grounded in theoretical analysis and empirical validation, showing that vector projection subtraction outperforms direct mean subtraction for bias correction. The method requires only post-hoc processing of pretrained embeddings. Contribution/Results: Evaluated on 38 mainstream multilingual embedding models, it improves retrieval performance on the MMTEB benchmark by 9.7σ, classification by 3.1σ, and other tasks by an average of 0.8σ. This work is the first to uncover the structural commonality of embedding bias across models and establishes an interpretable, general-purpose, training-free paradigm for enhancing embedding quality and semantic fidelity.
Abstract
We find that current text embedding models produce outputs with a consistent bias, i.e., each embedding vector $e$ can be decomposed as $e = \tilde{e} + \mu$, where $\mu$ is almost identical across all sentences. We propose a plug-and-play, training-free, and lightweight solution called Renormalization. Through extensive experiments, we show that renormalization consistently and statistically significantly improves the performance of existing models on the Massive Multilingual Text Embedding Benchmark (MMTEB). In particular, across 38 models, renormalization improves performance by 9.7$\sigma$ on retrieval tasks, 3.1$\sigma$ on classification tasks, and 0.8$\sigma$ on other types of tasks. Renormalization has two variants: directly subtracting $\mu$ from $e$, or subtracting the projection of $e$ onto $\mu$. We theoretically predict that the latter performs better, and our experiments confirm this prediction.
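The two variants of renormalization described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name `renormalize` is hypothetical, and the bias $\mu$ is assumed here to be estimated as the mean embedding over a corpus, followed by L2 re-normalization of each vector.

```python
import numpy as np

def renormalize(E, variant="projection"):
    """Sketch of post-hoc bias removal for a matrix of embeddings E of shape (n, d).

    Assumption: mu is estimated as the corpus-mean embedding; the paper's
    exact estimator may differ. Both variants end with L2 re-normalization.
    """
    mu = E.mean(axis=0)                      # sentence-invariant bias estimate
    if variant == "mean":
        # Variant 1: directly subtract mu from each embedding e.
        E_new = E - mu
    elif variant == "projection":
        # Variant 2: subtract the projection of e onto mu,
        # i.e. e - (e . mu_hat) mu_hat with mu_hat the unit vector along mu.
        mu_hat = mu / np.linalg.norm(mu)
        E_new = E - np.outer(E @ mu_hat, mu_hat)
    else:
        raise ValueError(f"unknown variant: {variant}")
    # Rescale every embedding back to unit norm.
    return E_new / np.linalg.norm(E_new, axis=1, keepdims=True)
```

After the projection variant, every embedding is exactly orthogonal to the estimated bias direction, which is the geometric sense in which the bias component has been removed while the effective semantic component is retained.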