🤖 AI Summary
This work addresses the theoretical ambiguity surrounding “linkage,” “building blocks,” and “problem decomposition” in model-based genetic algorithms (GAs). We propose the first algorithm-agnostic mathematical definition of linkage and unify it with the PAC learning framework. Methodologically, we integrate graph-theoretic modeling, probabilistic learnability analysis, and a linkage-learning algorithmic framework. We rigorously prove that problems with a bounded linkage degree admit exact decomposition into minimal building blocks, and that the optimal solution within each block is PAC-learnable from polynomially many samples. Our core contribution is the first unified theoretical framework that jointly characterizes the effectiveness of problem decomposition, generalization capability, and computational feasibility, thereby providing rigorous learnability guarantees and a principled decomposition-feasibility criterion for model-based GAs.
📝 Abstract
The concepts of linkage, building blocks, and problem decomposition have long existed in the genetic algorithm (GA) field and have guided the development of model-based GAs for decades. However, their definitions are usually vague, which makes rigorous theoretical treatment difficult. This paper provides an algorithm-independent definition of linkage. With this definition, the paper proves that any problem with a bounded degree of linkage is decomposable and that proper problem decomposition is achievable via linkage learning. The decomposition given in this paper also offers a new theoretical perspective on nearly decomposable problems with bounded difficulty and on building blocks. Finally, this paper relates problem decomposition to PAC learning and proves that the global optima of these problems and the minimum decomposition blocks are PAC learnable under certain conditions.
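To make the decomposition idea concrete, the following is a minimal illustrative sketch, not the paper's formal construction: it uses a common perturbation-based pairwise interaction test as a stand-in for a linkage measure, groups variables into blocks as connected components of the resulting linkage graph, and then optimizes each block independently. The function `f`, its block structure, and all names below are assumptions chosen for illustration.

```python
from itertools import product

# Hypothetical additively separable pseudo-Boolean function: variables
# interact only inside disjoint 2-bit blocks, so linkage degree <= 2.
BLOCKS = [(0, 1), (2, 3), (4, 5)]

def g(bits):
    # Per-block subfunction rewarding the all-ones pattern.
    return 3 if all(bits) else sum(bits) - 1

def f(x):
    return sum(g([x[i] for i in blk]) for blk in BLOCKS)

def linked(i, j, samples):
    # Perturbation test: i and j interact if the joint flip is
    # non-additive (nonzero second difference) for some sample x.
    for x in samples:
        xi, xj, xij = list(x), list(x), list(x)
        xi[i] ^= 1; xj[j] ^= 1; xij[i] ^= 1; xij[j] ^= 1
        if f(xij) - f(xi) - f(xj) + f(x) != 0:
            return True
    return False

def decompose(n, samples):
    # Build the empirical linkage graph and return its connected
    # components, i.e. the recovered building blocks.
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if linked(i, j, samples):
                adj[i].add(j); adj[j].add(i)
    seen, comps = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v); comp.append(v)
                stack.extend(adj[v] - seen)
        comps.append(sorted(comp))
    return comps

n = 6
samples = list(product([0, 1], repeat=n))  # exhaustive here; sampled in practice
blocks = decompose(n, samples)
print(blocks)

# With bounded linkage, per-block optima compose into the global optimum:
# optimize each recovered block independently and concatenate.
best = [0] * n
for blk in blocks:
    opt = max(product([0, 1], repeat=len(blk)),
              key=lambda bits: f([bits[blk.index(i)] if i in blk else 0
                                  for i in range(n)]))
    for pos, i in enumerate(blk):
        best[i] = opt[pos]
print(best, f(best))
```

Because `f` is additively separable over the blocks, the pairwise test links only variables within the same block, and concatenating the per-block optima yields the global optimum; this mirrors, in a toy setting, the paper's claim that bounded linkage makes exact decomposition and tractable per-block optimization possible.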