🤖 AI Summary
Current supervised fine-tuning (SFT) of large language models (LLMs) relies on static datasets, which fail to adapt to the model's evolving capabilities and thus let data quality stagnate. To address this, we propose Middo, the first model-aware dynamic data optimization framework, which establishes a closed-loop “diagnose–optimize–retrain” pipeline for the co-evolution of data and model. Middo leverages three orthogonal model signals: (i) loss pattern analysis to identify inefficient samples, (ii) embedding clustering to detect semantic biases, and (iii) self-alignment scoring to assess internal consistency. These signals jointly guide context-preserving data refinement and iterative optimization. Evaluated across multiple benchmarks, Middo achieves an average accuracy improvement of 7.15% over strong baselines, significantly enhancing data quality without increasing dataset size.
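As a rough sketch of how these three signals could be combined to flag samples for refinement, consider the snippet below. All names, data fields, thresholds, and flagging rules are illustrative assumptions for intuition only, not the paper's actual implementation.

```python
# Hypothetical sketch of Middo-style tri-axial sample diagnosis.
# Field names, thresholds, and flagging rules are assumptions,
# not the paper's actual design.
from dataclasses import dataclass

@dataclass
class Sample:
    text: str
    loss: float             # per-sample SFT loss under the current model
    cluster_density: float  # local density of the sample's embedding cluster
    self_align: float       # model's self-alignment score in [0, 1]

def flag_suboptimal(samples, loss_hi=2.5, density_hi=0.9, align_lo=0.5):
    """Flag samples whose model signals suggest they need refinement.

    - High loss            -> too complex or noisy for the current model.
    - High cluster density -> semantically redundant (low diversity).
    - Low self-alignment   -> internally inconsistent (low quality).
    """
    flagged = []
    for s in samples:
        reasons = []
        if s.loss > loss_hi:
            reasons.append("complexity")
        if s.cluster_density > density_hi:
            reasons.append("diversity")
        if s.self_align < align_lo:
            reasons.append("quality")
        if reasons:
            flagged.append((s, reasons))
    return flagged
```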
📝 Abstract
Supervised fine-tuning (SFT) of large language models (LLMs) fundamentally relies on high-quality training data. While data selection and data synthesis are two common strategies for improving data quality, existing approaches often suffer from static dataset curation that fails to adapt to evolving model capabilities. In this paper, we introduce Middo, a self-evolving Model-informed dynamic data optimization framework that combines model-aware data selection with context-preserving data refinement. Unlike conventional one-off filtering/synthesis methods, our framework establishes a closed-loop optimization system: (1) a self-referential diagnostic module proactively identifies suboptimal samples through tri-axial model signals, namely loss patterns (complexity), embedding cluster dynamics (diversity), and self-alignment scores (quality); (2) an adaptive optimization engine then transforms suboptimal samples into pedagogically valuable training points while preserving semantic integrity; (3) this optimization process continuously evolves with model capability through dynamic learning principles. Experiments on multiple benchmarks demonstrate that our method consistently enhances the quality of seed data and boosts LLM performance, improving accuracy by 7.15% on average while maintaining the original dataset scale. This work establishes a new paradigm for sustainable LLM training through dynamic human-AI co-evolution of data and models. Our datasets, models, and code will be released soon.
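For intuition, here is a minimal sketch of the closed loop the abstract describes; the `train`, `diagnose`, and `refine` helpers are stubs standing in for the paper's three modules, and the fixed round count and in-place refinement are assumptions, not the paper's stated procedure.

```python
# Hypothetical outer loop for the "diagnose -> optimize -> retrain" cycle.
# The three helpers are stubs for the paper's modules; the round count
# and the in-place update rule are illustrative assumptions.

def train(model, dataset):
    return model   # stub: run one round of SFT, return the updated model

def diagnose(model, dataset):
    return []      # stub: return (index, reasons) pairs for suboptimal samples

def refine(model, sample):
    return sample  # stub: rewrite the sample while preserving its semantics

def middo_loop(model, dataset, rounds=3):
    """Co-evolve data and model: retrain, flag weak samples, refine them."""
    for _ in range(rounds):
        model = train(model, dataset)
        flagged = diagnose(model, dataset)
        if not flagged:
            break  # current data already matches model capability
        for idx, _reasons in flagged:
            # Refine in place so the dataset size never grows.
            dataset[idx] = refine(model, dataset[idx])
    return model, dataset
```

The property this loop is meant to illustrate is that flagged samples are rewritten rather than added or dropped, so the dataset improves in quality while its scale stays constant.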