🤖 AI Summary
To address the poor interpretability and weak robustness that label ambiguity and uncertain instance-to-label mappings cause in multi-instance partial-label learning (MI-PLL), this paper proposes the first formal semantic modeling framework for weakly supervised learning. It integrates inductive logic programming (ILP) into neural learning pipelines, constructing structured logical constraints that regularize the hypothesis space for label propagation and define verifiable interpretability criteria. This neuro-symbolic approach enforces strict alignment between neural predictions and domain knowledge, achieving high accuracy while substantially improving robustness and decision transparency. Experiments demonstrate state-of-the-art performance across multiple benchmark datasets and yield auditable, logically grounded inference justifications, establishing a foundation for trustworthy weakly supervised applications in high-stakes domains such as healthcare and law.
📝 Abstract
Weak supervision allows machine learning models to learn from limited or noisy labels, but it introduces challenges in interpretability and reliability, particularly in multi-instance partial-label learning (MI-PLL), where models must resolve both ambiguous labels and uncertain instance-to-label mappings. We propose a neuro-symbolic framework with a formal semantics that integrates Inductive Logic Programming (ILP) to improve MI-PLL by providing structured relational constraints that guide learning. Within our semantic characterization, ILP defines a logical hypothesis space for label transitions, clarifies classifier semantics, and establishes interpretable performance standards. This hybrid approach improves robustness, transparency, and accountability in weakly supervised settings by ensuring that neural predictions align with domain knowledge. By embedding weak supervision in a logical framework, we enhance both interpretability and learning, making weak supervision better suited to real-world, high-stakes applications.
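To make the MI-PLL setting concrete, here is a minimal toy sketch (not the paper's implementation; the task, class count, and function names are illustrative assumptions). A bag of instances carries one weak label produced by an unknown transition function; a logical constraint enumerates which joint instance-label assignments are consistent with that weak label, and the learner scores predictions by marginalizing over that restricted hypothesis space:

```python
# Hypothetical MI-PLL sketch: two instance classifiers, and the weak bag-level
# label is the sum of the two (unknown) instance labels. The logical constraint
# plays the role ILP fills in the paper: it carves out the admissible
# hypothesis space of label transitions.
from itertools import product

CLASSES = range(4)  # assumed toy label set {0, 1, 2, 3}

def consistent_assignments(weak_label):
    """Logical constraint: keep only (y1, y2) pairs whose sum matches the weak label."""
    return [(y1, y2) for y1, y2 in product(CLASSES, CLASSES)
            if y1 + y2 == weak_label]

def bag_likelihood(p1, p2, weak_label):
    """Probability of the weak label under independent instance predictions,
    marginalized over the logically consistent assignments only."""
    return sum(p1[y1] * p2[y2] for y1, y2 in consistent_assignments(weak_label))

# With uniform predictions over 4 classes, weak label 3 has 4 consistent
# pairs (0,3), (1,2), (2,1), (3,0), each with probability 0.0625.
uniform = [0.25] * 4
print(bag_likelihood(uniform, uniform, 3))  # → 0.25
```

Training would then maximize this constrained likelihood, so gradients flow only through assignments the logic program admits; assignments that violate domain knowledge contribute nothing.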