🤖 AI Summary
Existing quality-filtering paradigms often erroneously discard valuable signals embedded in low-quality supervised fine-tuning (SFT) data. Method: We propose a neural-symbolic collaborative framework for data purification and reconstruction. It employs a statistical-prior-driven symbolic rule module to remove noise and a neural reconstruction module—guided by model latent representations and domain knowledge—to generate high-quality instruction-response pairs. Crucially, this approach achieves superior performance using *only* low-quality data, without requiring any high-quality examples. Contribution/Results: On five mainstream instruction-following benchmarks, our method significantly outperforms 13 state-of-the-art data-selection strategies. The enhanced dataset—constructed exclusively from low-quality data—surpasses the baseline model trained on ~300K raw, unfiltered samples, demonstrating that structurally reconstructed low-quality data attains higher information density and training efficacy.
📝 Abstract
Supervised Fine-Tuning (SFT) adapts pre-trained Large Language Models (LLMs) to domain-specific instructions by training on a curated subset of high-quality instruction-response pairs, typically drawn from a larger corpus that also contains many low-quality or noisy samples. However, existing quality-first paradigms often overlook valuable signals in discarded low-quality data and rely on imperfect quality filters. We introduce ENTP (Enhancing low-quality SFT data via Neural-symbolic Text Purge-Mix), a framework that revitalizes low-quality corpora through symbolic purification and neural reconstruction. The symbolic module identifies and prunes noisy samples based on statistical priors, while the neural component synthesizes enriched instruction-response pairs by leveraging latent representations and model knowledge. This neural-symbolic synergy enhances data informativeness and diversity. Experiments show that ENTP-augmented datasets, constructed exclusively from low-quality data, outperform 13 established data-selection baselines across five instruction-following benchmarks, and even surpass fine-tuning on the full original dataset (approximately 300K examples). Our results highlight the untapped potential of low-quality data and underscore the importance of intelligent purification and synthesis for efficient instruction alignment.
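To make the symbolic purification stage concrete, here is a minimal sketch of what a statistical-prior-driven pruning pass could look like. This is not the paper's actual ENTP implementation: the features (token-length bounds, repetition ratio) and thresholds are hypothetical stand-ins for the statistical priors the abstract describes.

```python
# Toy symbolic purification pass over instruction-response pairs.
# All heuristics and thresholds below are illustrative assumptions,
# not the ENTP paper's actual rules.

def repetition_ratio(text: str) -> float:
    """Fraction of adjacent token pairs that are exact repeats."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    repeats = sum(1 for a, b in zip(tokens, tokens[1:]) if a == b)
    return repeats / (len(tokens) - 1)

def symbolic_purge(pairs, min_tokens=3, max_tokens=2048, max_repetition=0.3):
    """Keep pairs whose response passes simple length and repetition priors."""
    kept = []
    for instruction, response in pairs:
        n_tokens = len(response.split())
        if not (min_tokens <= n_tokens <= max_tokens):
            continue  # too short or too long: likely truncated or degenerate
        if repetition_ratio(response) > max_repetition:
            continue  # heavy token repetition: likely decoding noise
        kept.append((instruction, response))
    return kept

pairs = [
    ("Explain SFT.", "SFT fine-tunes a pretrained model on instruction-response pairs."),
    ("Summarize the text.", "ok ok ok ok ok ok ok ok"),  # repetitive noise
    ("Translate to French.", "yes"),                      # too short
]
clean = symbolic_purge(pairs)  # only the first pair survives
```

In the full framework, pairs pruned here would not simply be discarded; the neural reconstruction module would regenerate usable instruction-response pairs from the salvageable signal in the low-quality corpus.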