AI Summary
This work investigates the data-level origins of "weak-to-strong generalization," focusing on the intrinsic distributional mechanisms that allow weak-model supervision to improve stronger models.
Method: We introduce *overlap density*—the proportion of samples that simultaneously contain both easily learnable patterns (capturable by weak models) and hard-to-learn patterns (accessible only to strong models)—as a distributional measure of generalization potential. Building on this, we design a computable overlap-point detection algorithm and a theory-guided data selection strategy, integrating pattern co-occurrence modeling, generalization and regret-bound analysis, and multi-source active querying.
Contribution/Results: We theoretically prove that the generalization gain increases monotonically with overlap density. Extensive multi-task, multi-model experiments demonstrate that our approach significantly enhances strong-model performance in few-shot settings. The framework establishes a new paradigm for efficient learning and superalignment, grounded in data distribution properties rather than model-centric heuristics.
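To make the overlap-density measure concrete, here is a minimal synthetic sketch. The boolean flags `easy` and `hard` are hypothetical stand-ins for whether a sample carries a weak-model-learnable pattern or a strong-model-only pattern; in practice these indicators are latent and must be estimated by the paper's detection algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-sample indicators (latent in real data):
# easy  -> sample contains a pattern the weak model can learn
# hard  -> sample contains a pattern only the strong model can learn
easy = rng.random(n) < 0.7
hard = rng.random(n) < 0.4

# Overlap density: fraction of samples exhibiting BOTH pattern types.
# On such points, weak predictions supply supervision that the strong
# model can associate with the hard pattern.
overlap_density = float(np.mean(easy & hard))
print(f"overlap density ~ {overlap_density:.3f}")
```

With independent pattern flags the density is roughly the product of the two marginal rates (here about 0.28); correlated patterns in real datasets can push it higher or lower.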
Abstract
The weak-to-strong generalization phenomenon is the driver for important machine learning applications including highly data-efficient learning and, most recently, performing superalignment. While decades of research have resulted in numerous algorithms that produce strong empirical performance, understanding what aspects of data enable weak-to-strong generalization has been understudied. We propose a simple data-centric mechanism that characterizes weak-to-strong generalization: the overlap density. Intuitively, generalization tracks the number of points that contain overlaps, i.e., both easy patterns (learnable by a weak model) and challenging patterns (only learnable by a stronger model), as with such points, weak predictions can be used to learn challenging patterns by stronger models. We provide a practical overlap detection algorithm to find such points in datasets and leverage them to learn, among multiple sources of data, which to query when seeking to maximize overlap density and thereby enhance weak-to-strong generalization. We present a theoretical result showing that the generalization benefit is a function of the overlap density and a regret bound for our data selection algorithm. Empirically, we validate the mechanism and the overlap detection algorithm on a wide array of settings.
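The abstract's multi-source querying problem, with its accompanying regret bound, has the shape of a multi-armed bandit: each data source is an arm and its (estimated) overlap density is the reward to maximize. The sketch below uses a generic UCB1 rule as an illustration of that framing; the function name, the samplers, and the specific bandit rule are assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def select_sources_ucb(source_samplers, rounds, rng):
    """Illustrative UCB1-style selection among data sources, treating each
    queried sample's overlap indicator (0/1) as the bandit reward."""
    k = len(source_samplers)
    counts = np.zeros(k)
    means = np.zeros(k)
    for t in range(rounds):
        if t < k:
            i = t  # query each source once to initialize estimates
        else:
            ucb = means + np.sqrt(2.0 * np.log(t + 1) / counts)
            i = int(np.argmax(ucb))
        reward = source_samplers[i](rng)  # did this sample contain an overlap?
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]  # running mean update
    return counts

rng = np.random.default_rng(1)
# Hypothetical sources with different true overlap densities.
densities = [0.2, 0.5, 0.8]
samplers = [lambda r, p=p: float(r.random() < p) for p in densities]

pulls = select_sources_ucb(samplers, rounds=2000, rng=rng)
print(pulls)  # the highest-density source should receive the most queries
```

After enough rounds, queries concentrate on the source with the highest overlap density, mirroring the goal of the paper's selection strategy (whose regret bound is with respect to always querying the best source).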