🤖 AI Summary
This work addresses the insufficient synergy between cube learning and dependency schemes in QCDCL proof systems. We formally define an enhanced QCDCL system that, for the first time, natively integrates cube learning with full dependency schemes, specifically $D^{std}$ and $D^{rrs}$. We establish sufficient conditions ensuring soundness and completeness, and prove that, under relaxed decision orders, both $D^{std}$ and $D^{rrs}$ provably shorten refutations of false QBFs. These theoretical gains in propagation and decision flexibility are expected to translate into shorter proofs and faster QBF solving in practice. The core innovation lies in a tight coupling between cube learning and dependency schemes, yielding a QCDCL paradigm that unifies theoretical rigor with practical potential.
📝 Abstract
Quantified Conflict Driven Clause Learning (QCDCL) is one of the main approaches to solving Quantified Boolean Formulas (QBFs). Cube learning is employed in this approach to ensure that true formulas can be verified. Dependency schemes help to detect spurious dependencies that are implied by the variable ordering in the quantifier prefix of a QBF but are not essential for constructing (counter)models. This detection can provably shorten refutations in specific proof systems, and is expected to speed up runs of QBF solvers.
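As an illustration of a spurious dependency (this example formula is ours, not taken from the work under discussion), consider

$$\forall x\,\exists y\;.\;(x \lor y)\land(\lnot x \lor y).$$

The prefix places $y$ inside the scope of $x$, so the trivial (prefix-order) dependency relation records $y$ as depending on $x$; yet setting $y=1$ satisfies the matrix for every value of $x$, so the dependency is spurious. In particular, since $y$ never occurs negatively, no resolution path connects the literals of $x$ to $\lnot y$, so a scheme such as $D^{rrs}$ can drop the pair $(x,y)$.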
The simplest underlying proof system [BeyersdorffBöhm-LMCS2023] formalises the reasoning in the QCDCL approach on false formulas, when neither cube learning nor dependency schemes are used. The work of [BöhmPeitlBeyersdorff-AI2024] further incorporates cube learning. The work of [ChoudhuryMahajan-JAR2024] incorporates a limited use of dependency schemes, but without cube learning.
In this work, proof systems are formalised that underlie the reasoning of QCDCL solvers which use cube learning and which use dependency schemes at all stages. Sufficient conditions for soundness and completeness are presented, and it is shown that using the standard and reflexive resolution path dependency schemes ($D^{std}$ and $D^{rrs}$) to relax the decision order provably shortens refutations.
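The effect of relaxing the decision order can be sketched in a few lines of Python. This is an illustrative model only, with made-up names such as `decidable`; it is not the data structure of any actual QCDCL solver. Under the trivial scheme, a variable may be decided only after everything to its left in the prefix; a dependency scheme that drops spurious pairs enlarges the set of eligible decision variables.

```python
def decidable(prefix, deps, assigned):
    """Variables eligible for the next decision.

    prefix   -- list of variable names in quantification order
    deps     -- dict mapping each variable to the set of variables
                it depends on under the chosen dependency scheme
    assigned -- set of variables already assigned

    A variable may be decided once all its dependencies are assigned.
    """
    return [v for v in prefix
            if v not in assigned and deps[v] <= assigned]


# Prefix: forall x exists y.
prefix = ["x", "y"]

# Trivial scheme: y depends on the earlier-quantified x.
trivial = {"x": set(), "y": {"x"}}

# A scheme such as D^rrs may detect that y is independent of x.
relaxed = {"x": set(), "y": set()}

# Under the trivial scheme only x can be decided first;
# under the relaxed scheme both x and y are eligible.
```

With the relaxed relation, a solver may decide $y$ before $x$, which is exactly the freedom that, per the results above, provably shortens refutations.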
When the decisions are restricted to follow the quantification order, but dependency schemes are used in propagation and learning, in conjunction with cube learning, the resulting proof systems based on $D^{std}$ and $D^{rrs}$ are investigated in detail and their relative strengths are analysed.