🤖 AI Summary
This paper challenges the prevailing view in neurosymbolic AI that conditional independence inherently induces deterministic bias. It argues that the observed co-occurrence of conditional independence among random variables and deterministic preferences over the solution space stems not from any inherent flaw in the probabilistic structure, but from improper injection of logical constraints.
Method: The authors propose a neurosymbolic coupling framework that passes softmax outputs through a sparse logical constraint graph, supported by analysis of the induced probability distributions and by counterfactual validation.
Contribution/Results: This work provides the first systematic refutation of the "harmful conditional independence" hypothesis. Experiments demonstrate that, when logical constraints are imposed correctly, deterministic bias is eliminated, enabling uniform sampling over solution spaces and robust reasoning across multiple benchmark tasks. The key contribution is to disentangle the root cause of the bias, establishing constraint modeling, rather than probabilistic architecture, as the critical design dimension for mitigating deterministic bias in neurosymbolic systems.
📝 Abstract
A popular approach to neurosymbolic AI is to take the output of the last layer of a neural network, e.g. a softmax activation, and pass it through a sparse computation graph encoding certain logical constraints one wishes to enforce. This induces a probability distribution over a set of random variables, which happen to be conditionally independent of each other in many commonly used neurosymbolic AI models. Such conditionally independent random variables have been deemed harmful as their presence has been observed to co-occur with a phenomenon dubbed deterministic bias, where systems learn to deterministically prefer one of the valid solutions from the solution space over the others. We provide evidence contesting this conclusion and show that the phenomenon of deterministic bias is an artifact of improperly applying neurosymbolic AI.
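The pipeline described in the abstract can be sketched in a few lines. The code below is a brute-force illustration, not the paper's implementation: it treats the network's outputs as independent Bernoulli probabilities, enumerates all assignments, keeps those satisfying a logical constraint, and renormalizes. A sparse computation graph computes the same conditional distribution without exhaustive enumeration; the `exactly_one` constraint and all function names here are illustrative assumptions.

```python
from itertools import product

def constrained_distribution(p, constraint):
    """Distribution over assignments of conditionally independent
    Bernoulli variables (success probabilities p), conditioned on a
    logical constraint: enumerate, filter, renormalize.

    A sparse computation graph (e.g. a logical circuit) evaluates the
    same quantity efficiently; this exhaustive version is only a sketch.
    """
    dist = {}
    for assign in product([0, 1], repeat=len(p)):
        if not constraint(assign):
            continue  # assignments violating the constraint get mass 0
        prob = 1.0
        for p_i, a in zip(p, assign):
            prob *= p_i if a else (1.0 - p_i)
        dist[assign] = prob
    z = sum(dist.values())  # total mass on valid solutions
    return {a: w / z for a, w in dist.items()}

# Illustrative constraint: exactly one of the variables is true.
def exactly_one(assign):
    return sum(assign) == 1

# With p = 0.5 everywhere, the induced distribution is uniform over the
# three valid solutions (1,0,0), (0,1,0), (0,0,1): the distribution
# itself does not deterministically prefer any single solution.
dist = constrained_distribution([0.5, 0.5, 0.5], exactly_one)
```

This makes the abstract's point concrete at the level of the distribution: with symmetric inputs, conditioning independent variables on a constraint yields no preferred solution, so any deterministic preference a trained system exhibits must enter elsewhere.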