🤖 AI Summary
This work addresses the learnability determination problem for neuro-symbolic (NeSy) tasks in hybrid systems. We model NeSy tasks as derived constraint satisfaction problems (DCSPs) and establish, for the first time, a formal learnability criterion: a task is learnable if and only if its associated DCSP admits a unique solution. Building upon the clustering structure of the hypothesis space, we derive an upper bound on the generalization error for learnable tasks and quantify the asymptotic error's dependence on the degree of divergence of the solution space. Integrating formal learnability theory, DCSP modeling, and statistical learning analysis, our study delivers three theoretical contributions: (i) a decidable learnability criterion; (ii) a finite-sample generalization error bound; and (iii) an asymptotic scaling law governing error convergence. Collectively, these results provide a unified theoretical foundation for the design and analysis of NeSy algorithms.
📄 Abstract
This paper analyzes the learnability of neuro-symbolic (NeSy) tasks within hybrid systems. We show that the learnability of NeSy tasks can be characterized by their derived constraint satisfaction problems (DCSPs). Specifically, a task is learnable if the corresponding DCSP has a unique solution; otherwise, it is unlearnable. For learnable tasks, we establish error bounds by exploiting the clustering property of the hypothesis space. Additionally, we analyze the asymptotic error for general NeSy tasks, showing that the expected error scales with the disagreement among solutions. Our results offer a principled approach to determining learnability and provide insights into the design of new algorithms.
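To make the learnability criterion concrete, here is a minimal toy sketch (our own illustration, not the paper's formalism): symbols must be grounded to small integer values, supervision only constrains weighted sums of symbol values, and the derived CSP asks which groundings satisfy every observed constraint. Under the stated criterion, the task is learnable exactly when that CSP has a unique solution. The domain, constraint form, and function names here are hypothetical choices for illustration.

```python
from itertools import product

def solutions(constraints, domain=range(3)):
    """Enumerate groundings (x, y) of two symbols that satisfy every
    observed constraint cx*x + cy*y == s. This brute-force search is the
    'derived CSP' of the toy task."""
    return [g for g in product(domain, repeat=2)
            if all(g[0] * cx + g[1] * cy == s for cx, cy, s in constraints)]

# Two observations pin down a unique grounding -> the task is learnable.
learnable = solutions([(1, 1, 3), (1, 0, 1)])   # x + y = 3 and x = 1
print(learnable)        # [(1, 2)] -- unique solution

# A single symmetric observation admits several groundings -> unlearnable;
# the solutions disagree, which is what drives the asymptotic error.
unlearnable = solutions([(1, 1, 2)])            # x + y = 2
print(unlearnable)      # [(0, 2), (1, 1), (2, 0)] -- multiple solutions
```

The multi-solution case also illustrates the asymptotic result: the three admissible groundings disagree on the value of each symbol, and the expected error of any learner is governed by that disagreement.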