Neuro-symbolic Weak Supervision: Theory and Semantics

📅 2025-03-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address poor interpretability and weak robustness in multi-instance partial-label learning (MI-PLL) caused by label ambiguity and uncertain instance-to-label mappings, this paper proposes the first formal semantic modeling framework for weakly supervised learning. We innovatively integrate inductive logic programming (ILP) into neural learning pipelines, constructing structured logical constraints to regularize the hypothesis space for label propagation and defining verifiable interpretability criteria. This neuro-symbolic approach ensures strict alignment between neural predictions and domain knowledge, achieving high accuracy while substantially enhancing model robustness and decision transparency. Experiments demonstrate state-of-the-art performance across multiple benchmark datasets and provide auditable, logically grounded inference justifications—establishing a foundation for trustworthy weakly supervised applications in high-stakes domains such as healthcare and law.

Technology Category

Application Category

📝 Abstract
Weak supervision allows machine learning models to learn from limited or noisy labels, but it introduces challenges in interpretability and reliability - particularly in multi-instance partial label learning (MI-PLL), where models must resolve both ambiguous labels and uncertain instance-label mappings. We propose a semantics for neuro-symbolic framework that integrates Inductive Logic Programming (ILP) to improve MI-PLL by providing structured relational constraints that guide learning. Within our semantic characterization, ILP defines a logical hypothesis space for label transitions, clarifies classifier semantics, and establishes interpretable performance standards. This hybrid approach improves robustness, transparency, and accountability in weakly supervised settings, ensuring neural predictions align with domain knowledge. By embedding weak supervision into a logical framework, we enhance both interpretability and learning, making weak supervision more suitable for real-world, high-stakes applications.
Problem

Research questions and friction points this paper is trying to address.

Resolves ambiguous labels in multi-instance partial learning
Integrates ILP for structured relational constraints in learning
Enhances interpretability and reliability in weak supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates Inductive Logic Programming for MI-PLL
Defines logical hypothesis space for labels
Enhances interpretability with hybrid framework