🤖 AI Summary
This paper investigates whether the algorithmic template of local regularization suffices to learn all learnable multiclass classification problems in the transductive model of learning. Method: The authors construct a counterexample: a multiclass problem that is learnable in both the PAC and transductive models, yet provably not learnable transductively by any local regularizer. The hypothesis class, and the lower-bound proof, are built on principles from cryptographic secret sharing. Contribution/Results: The result establishes an inherent limitation of local regularization in the transductive setting, refuting its universality there and partly resolving an open question of Asilis et al. (COLT 2024). Because the negative result is confined to the transductive model, it leaves open the intriguing possibility of a PAC/transductive separation with respect to local regularization.
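For context, local regularization generalizes explicit regularization, a.k.a. structural risk minimization (SRM). A minimal sketch of the SRM template it extends, assuming standard notation (labeled sample of size $n$, hypothesis class $\mathcal{H}$, fixed regularizer $\psi$); the precise definition of *local* regularization is given in Asilis et al. (COLT 2024):

```latex
% Structural risk minimization (explicit regularization):
% return a hypothesis minimizing empirical error plus a fixed penalty.
\hat{h} \in \operatorname*{arg\,min}_{h \in \mathcal{H}}
  \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\!\left[ h(x_i) \neq y_i \right]
  \;+\; \psi(h)
```

Roughly speaking, local regularization relaxes this template by letting the penalty vary with the data rather than being a single fixed $\psi$; the paper's negative result says that no penalty scheme of this kind can transductively learn the constructed class.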
📝 Abstract
We partly resolve an open question raised by Asilis et al. (COLT 2024): whether the algorithmic template of local regularization -- an intriguing generalization of explicit regularization, a.k.a. structural risk minimization -- suffices to learn all learnable multiclass problems. Specifically, we provide a negative answer to this question in the transductive model of learning. We exhibit a multiclass classification problem which is learnable in both the transductive and PAC models, yet cannot be learned transductively by any local regularizer. The corresponding hypothesis class, and our proof, are based on principles from cryptographic secret sharing. We outline challenges in extending our negative result to the PAC model, leaving open the tantalizing possibility of a PAC/transductive separation with respect to local regularization.
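The abstract does not detail the construction, but the secret-sharing principle it invokes is easy to illustrate. Below is a minimal, hypothetical sketch of additive (XOR) secret sharing in Python (not the paper's construction; the names `share` and `reconstruct` are illustrative). It shows the property such lower bounds typically exploit: all n shares jointly determine the secret, while any strict subset is statistically independent of it, so partial views reveal nothing.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n XOR shares (n >= 2): all n shares together
    reconstruct it, while any n-1 of them are jointly uniform random
    and carry no information about the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))  # final share fixes the XOR
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the secret."""
    return reduce(xor_bytes, shares)

if __name__ == "__main__":
    msg = b"label"
    parts = share(msg, 4)
    assert reconstruct(parts) == msg  # the full set of shares recovers msg
    # any 3 of the 4 shares are uniformly distributed, independent of msg
```

Heuristically, a hypothesis class built from such shares can make the correct labeling depend on globally distributed information that no local view suffices to recover; see the paper for the actual construction and proof.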