Annotating Training Data for Conditional Semantic Textual Similarity Measurement using Large Language Models

📅 2025-09-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Conditional Semantic Textual Similarity (C-STS) has long suffered from a scarcity of large-scale, high-quality labeled data, compounded by ambiguous condition statements and noisy similarity annotations in existing datasets. Method: This paper proposes an LLM-based automated data cleaning and re-annotation framework for C-STS. It jointly refines condition statements and recalibrates fine-grained similarity scores via LLM reasoning, requiring only minimal human verification to reconstruct the dataset with high confidence. Contribution/Results: Training on the resulting large-scale re-annotated dataset substantially improves model quality: the C-STS model achieves a statistically significant 5.4% gain in Spearman correlation on the standard benchmark, outperforming the baselines. This work establishes a scalable data re-annotation paradigm for semantic similarity modeling under low-resource conditions.

📝 Abstract
Semantic similarity between two sentences depends on the aspects considered between those sentences. To study this phenomenon, Deshpande et al. (2023) proposed the Conditional Semantic Textual Similarity (C-STS) task and annotated a human-rated similarity dataset containing pairs of sentences compared under two different conditions. However, Tu et al. (2024) found various annotation issues in this dataset and showed that manually re-annotating a small portion of it leads to more accurate C-STS models. Despite these pioneering efforts, the lack of large and accurately annotated C-STS datasets remains a blocker for making progress on this task as evidenced by the subpar performance of the C-STS models. To address this training data need, we resort to Large Language Models (LLMs) to correct the condition statements and similarity ratings in the original dataset proposed by Deshpande et al. (2023). Our proposed method is able to re-annotate a large training dataset for the C-STS task with minimal manual effort. Importantly, by training a supervised C-STS model on our cleaned and re-annotated dataset, we achieve a 5.4% statistically significant improvement in Spearman correlation. The re-annotated dataset is available at https://LivNLP.github.io/CSTS-reannotation.
Problem

Research questions and friction points this paper is trying to address.

Correcting annotation errors in the C-STS dataset
Generating accurate training data using LLMs
Improving supervised model performance for conditional semantic similarity
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs correct condition statements and ratings
Re-annotate large dataset with minimal manual effort
Cleaned dataset improves supervised C-STS model performance
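The re-annotation idea above can be sketched as a simple loop: for each example, an LLM first rewrites an ambiguous condition statement, then re-rates the similarity of the sentence pair under the corrected condition, and examples whose score changes sharply are flagged for the minimal human verification the paper mentions. This is a hypothetical illustration, not the authors' pipeline: the prompt wording, the `llm` callable, and the `max_delta` review threshold are all assumptions.

```python
# Hypothetical sketch of an LLM-based C-STS re-annotation loop.
# `llm` is any callable taking a prompt string and returning a text
# response (e.g. a wrapper around an API client); it is an assumption,
# as are the prompts and the review threshold.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class CSTSExample:
    sentence1: str
    sentence2: str
    condition: str
    similarity: int  # 1 (dissimilar) to 5 (similar), as in the original dataset

def reannotate(example: CSTSExample, llm, max_delta: int = 2):
    """Rewrite an ambiguous condition and re-rate similarity with an LLM.

    Returns (new_example, needs_review); examples whose score moves by
    more than `max_delta` are flagged for human verification.
    """
    # Step 1: refine the condition statement so it names a single aspect.
    fixed_condition = llm(
        "Rewrite this C-STS condition so it names a single, unambiguous "
        f"aspect of comparison: {example.condition!r}"
    ).strip()
    # Step 2: re-rate similarity under the corrected condition.
    rating_text = llm(
        "On a 1-5 scale, rate the similarity of\n"
        f"A: {example.sentence1}\nB: {example.sentence2}\n"
        f"considering only: {fixed_condition}\nAnswer with one digit."
    )
    new_score = int(rating_text.strip()[0])
    # Large disagreements with the original label go to a human reviewer.
    needs_review = abs(new_score - example.similarity) > max_delta
    return replace(example, condition=fixed_condition, similarity=new_score), needs_review
```

With a real LLM client plugged in as `llm`, the `needs_review` flag is what keeps the manual effort minimal: only examples where the model and the original annotation disagree strongly need a human look.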