🤖 AI Summary
Existing conditional semantic textual similarity (C-STS) approaches rely predominantly on discriminative models, which cannot directly optimize the non-differentiable Spearman correlation (a standard ranking metric) and fail to integrate large language models (LLMs) with reinforcement learning (RL). To address this, we propose PoLi-RL, the first end-to-end RL framework tailored for C-STS. It employs a pointwise-to-listwise two-stage curriculum and introduces a Parallel Slice Ranking Reward (PSRR) mechanism for fine-grained credit assignment. Crucially, PoLi-RL unifies pointwise, pairwise, and listwise objectives into a hybrid multi-level reward, with an LLM serving as the policy network so that the Spearman correlation can be optimized directly. On the official C-STS benchmark, PoLi-RL achieves a new state-of-the-art Spearman correlation of 48.18, surpassing all prior cross-encoder-based methods.
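To make the hybrid multi-level reward concrete, here is a minimal sketch of how pointwise, pairwise, and listwise signals could be combined into a single scalar reward. The weights, the pointwise tolerance, and the function names are illustrative assumptions, not the paper's exact reward definitions.

```python
import numpy as np
from scipy.stats import spearmanr

def pointwise_reward(pred, gold, tol=0.5):
    # Binary reward: is the predicted score close enough to the gold score?
    # (tol is an assumed tolerance, not taken from the paper.)
    return 1.0 if abs(pred - gold) <= tol else 0.0

def pairwise_reward(preds, golds):
    # Fraction of sample pairs whose predicted ordering matches the gold ordering.
    correct, total = 0, 0
    for i in range(len(preds)):
        for j in range(i + 1, len(preds)):
            if golds[i] == golds[j]:
                continue  # skip pairs tied in the gold labels
            total += 1
            if (preds[i] - preds[j]) * (golds[i] - golds[j]) > 0:
                correct += 1
    return correct / total if total else 0.0

def listwise_reward(preds, golds):
    # Spearman correlation over the whole list, rescaled from [-1, 1] to [0, 1].
    rho, _ = spearmanr(preds, golds)
    return 0.0 if np.isnan(rho) else (rho + 1.0) / 2.0

def hybrid_reward(preds, golds, weights=(0.4, 0.3, 0.3)):
    # Weighted combination of the three reward levels (weights are assumed).
    point = float(np.mean([pointwise_reward(p, g) for p, g in zip(preds, golds)]))
    return (weights[0] * point
            + weights[1] * pairwise_reward(preds, golds)
            + weights[2] * listwise_reward(preds, golds))
```

Under the two-stage curriculum described above, stage one would presumably use only the pointwise term, with the hybrid combination introduced in stage two.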
📝 Abstract
Conditional Semantic Textual Similarity (C-STS) measures the semantic proximity between text segments under a specific condition, thereby resolving the ambiguity inherent in traditional STS. However, existing methods are largely confined to discriminative models and fail to leverage recent advances in Large Language Models (LLMs) and Reinforcement Learning (RL). RL is a particularly well-suited paradigm for this task, as it can directly optimize the non-differentiable Spearman ranking metric and guide the reasoning process that C-STS requires. Yet we find that naively applying listwise RL fails to produce meaningful improvements, because the model is overwhelmed by complex, coarse-grained reward signals. To address this challenge, we introduce PoLi-RL, a novel Point-to-List Reinforcement Learning framework. PoLi-RL employs a two-stage curriculum: it first trains the model with simple pointwise rewards to establish fundamental scoring capabilities, then transitions to a hybrid reward combining pointwise, pairwise, and listwise objectives to refine the model's ability to discern subtle semantic distinctions. Crucially, we propose a Parallel Slice Ranking Reward (PSRR) mechanism that computes ranking rewards in parallel slices, where each slice comprises same-indexed completions from different samples. This provides a precise, differentiated learning signal for each individual completion, enabling granular credit assignment and effective optimization. On the official C-STS benchmark, PoLi-RL achieves a Spearman correlation coefficient of 48.18, establishing a new SOTA for the cross-encoder architecture. As the first work to successfully apply RL to C-STS, our study introduces a powerful and precise paradigm for training LLMs on complex, ranking-based conditional judgment tasks.
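The following is a minimal sketch of how the described slicing could be realized in a GRPO-style setup where each of B batch samples yields K completions: slice k gathers the k-th completion from every sample and receives a slice-level Spearman-based ranking reward. The array shapes, the [0, 1] rescaling, and the name `psrr_rewards` are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.stats import spearmanr

def psrr_rewards(pred_scores, gold_scores):
    """Parallel Slice Ranking Reward (sketch).

    pred_scores: (B, K) array of predicted similarity scores, where row i
                 holds the K sampled completions for batch sample i.
    gold_scores: (B,) array of gold similarity labels.
    Returns a (B, K) reward matrix.
    """
    B, K = pred_scores.shape
    rewards = np.zeros((B, K))
    for k in range(K):
        # Slice k: the k-th completion from every sample in the batch.
        rho, _ = spearmanr(pred_scores[:, k], gold_scores)
        rho = 0.0 if np.isnan(rho) else rho
        # All completions in the slice share this slice-level ranking reward,
        # so the K completions of the same sample (which live in different
        # slices) receive differentiated signals.
        rewards[:, k] = (rho + 1.0) / 2.0
    return rewards

# Toy usage: 4 samples, 3 completions each.
preds = np.array([[1.0, 2.0, 4.0],
                  [2.0, 1.0, 3.0],
                  [3.0, 4.0, 2.0],
                  [4.0, 3.0, 1.0]])
golds = np.array([1.0, 2.0, 3.0, 4.0])
print(psrr_rewards(preds, golds))  # one ranking reward per slice, broadcast down each column
```

Because each column is scored independently, completions sampled from the same prompt can earn different rewards, which is what gives the granular credit assignment the abstract describes.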