AI Summary
In programmatic weak supervision, manually designed labeling functions (LFs) are error-prone and rely heavily on domain expertise. Method: This paper proposes an automatic LF repair method that uses only 5–20 labeled examples. It models LFs as conditional rules and jointly optimizes their individual accuracy and the evidential sufficiency of the weak labels they produce. Under a minimal-modification constraint, it employs a satisfiability-driven optimization framework to selectively refine LFs, adjusting only their trigger logic or output rather than redesigning them from scratch. Contribution/Results: Evaluated on multiple benchmark tasks, the method significantly improves both LF accuracy and downstream model performance, demonstrating effective and robust LF repair at ultra-low annotation cost.
Abstract
Programmatic weak supervision (PWS) significantly reduces human effort for labeling data by combining the outputs of user-provided labeling functions (LFs) on unlabeled datapoints. However, the quality of the generated labels depends directly on the accuracy of the LFs. In this work, we study the problem of fixing LFs based on a small set of labeled examples. Towards this goal, we develop novel techniques for repairing a set of LFs by minimally changing their results on the labeled examples such that the fixed LFs ensure that (i) there is sufficient evidence for the correct label of each labeled datapoint and (ii) the accuracy of each repaired LF is sufficiently high. We model LFs as conditional rules which enables us to refine them, i.e., to selectively change their output for some inputs. We demonstrate experimentally that our system improves the quality of LFs based on surprisingly small sets of labeled datapoints.
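The idea of modeling an LF as a conditional rule and refining it by selectively changing its output can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual system: the class `ConditionalLF`, the helper `repair_output`, and the toy sentiment data are all hypothetical, and the repair shown here is the simplest case (keeping the trigger fixed and choosing the output label that maximizes accuracy on the small labeled set), whereas the paper's framework also adjusts trigger logic under a satisfiability-driven, minimal-change objective.

```python
# Hypothetical sketch: LFs as conditional rules, plus a minimal "repair"
# that keeps an LF's trigger but re-chooses its output label so that it
# agrees as much as possible with a small labeled set.
from dataclasses import dataclass
from typing import Callable, List, Optional

ABSTAIN = -1  # conventional "no vote" output in weak supervision


@dataclass
class ConditionalLF:
    """An LF as a conditional rule: if trigger(x) holds, emit `label`."""
    name: str
    trigger: Callable[[str], bool]
    label: int

    def __call__(self, x: str) -> int:
        return self.label if self.trigger(x) else ABSTAIN


def accuracy(lf: ConditionalLF, xs: List[str], ys: List[int]) -> Optional[float]:
    """Accuracy on the datapoints where the LF does not abstain."""
    hits = [lf(x) == y for x, y in zip(xs, ys) if lf(x) != ABSTAIN]
    return sum(hits) / len(hits) if hits else None


def repair_output(lf: ConditionalLF, xs: List[str], ys: List[int],
                  labels=(0, 1)) -> ConditionalLF:
    """Minimal-modification repair of the output only: keep the trigger,
    pick the output label with the highest accuracy on the labeled set."""
    best = max(labels, key=lambda lab: accuracy(
        ConditionalLF(lf.name, lf.trigger, lab), xs, ys) or 0.0)
    return ConditionalLF(lf.name, lf.trigger, best)


# Toy sentiment task: the LF's trigger is sensible but its output is inverted.
xs = ["great movie", "great acting", "terrible plot", "boring film"]
ys = [1, 1, 0, 0]
bad_lf = ConditionalLF("contains_great", lambda x: "great" in x, label=0)
fixed = repair_output(bad_lf, xs, ys)  # flips the output label 0 -> 1
```

Only the rule's output changed; its trigger (and hence its coverage, i.e. where it votes) is untouched, which is one concrete sense in which the repair is minimal.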