🤖 AI Summary
To address the scalability bottleneck that inefficient CNF encodings of expressive feature-modeling constructs, particularly cardinality constraints, impose on automated reasoning, this paper introduces a paradigm that integrates pseudo-Boolean (PB) encoding with compilation to deterministic decomposable negation normal form (d-DNNF). We propose the first PB-based encoding for feature models and design a direct compilation pipeline from PB constraints to Boolean d-DNNF, enabling efficient support for expressive constraints within compilable logic representations. Experimental evaluation on diverse real-world feature-model datasets demonstrates substantial improvements over CNF-based baselines: translation to the PB encoding is several orders of magnitude faster, and d-DNNF compilation achieves a 10–100× speedup. Crucially, the approach retains competitive performance on basic logical constructs. This work narrows the gap between expressive feature modeling and scalable, exact automated reasoning.
📝 Abstract
Configurable systems typically consist of reusable assets that have dependencies on each other. To specify such dependencies, feature models are commonly used. As feature models in practice are often complex, automated reasoning is typically employed to analyze the dependencies. Here, the de facto standard is to translate the feature model to conjunctive normal form (CNF), which enables the use of off-the-shelf tools such as SAT or #SAT solvers. However, modern feature-modeling dialects often contain constructs, such as cardinality constraints, that are ill-suited for conversion to CNF. This mismatch between the input of reasoning engines and the available feature-modeling dialects limits the applicability of the more expressive constructs. In this work, we narrow this gap between expressive constructs and scalable automated reasoning. Our contribution is twofold: First, we provide a pseudo-Boolean encoding for feature models, which facilitates smaller representations of commonly employed constructs than a Boolean encoding. Second, we propose a novel method to compile pseudo-Boolean formulas to Boolean d-DNNF. With the compiled d-DNNFs, we can resort to a plethora of efficient analyses already used in feature modeling. Our empirical evaluation shows that our approach substantially outperforms the state of the art based on CNF inputs for expressive constructs. For every considered dataset, representing different feature models and feature-modeling constructs, the feature models can be translated to pseudo-Boolean significantly faster than to CNF. Overall, deriving d-DNNFs from a feature model with the targeted expressive constraints can be substantially accelerated using our pseudo-Boolean approach. Furthermore, our approach is competitive on feature models with only basic constructs.
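To illustrate the size gap the abstract refers to, the sketch below (not from the paper; a minimal illustration) contrasts a cardinality constraint written as a single pseudo-Boolean inequality with its naive binomial CNF encoding, which needs one clause per (k+1)-subset of variables. Auxiliary-variable CNF encodings such as sequential counters are polynomial, but still far larger than the one-line PB form.

```python
from itertools import combinations
from math import comb

def at_most_k_cnf(variables, k):
    """Naive (binomial) CNF encoding of the cardinality constraint
    sum(variables) <= k: for every subset of k+1 variables, at least
    one must be false (a clause of k+1 negated literals)."""
    return [[-v for v in subset] for subset in combinations(variables, k + 1)]

# "At most 2 of the 5 features x1..x5" is a single PB constraint:
#     x1 + x2 + x3 + x4 + x5 <= 2
# whereas the naive CNF encoding already needs C(5, 3) clauses:
clauses = at_most_k_cnf([1, 2, 3, 4, 5], k=2)
print(len(clauses))    # 10 clauses for just 5 variables
print(comb(100, 11))   # clause count for "at most 10 of 100 features"
```

The second print shows why the blow-up matters in practice: for "at most 10 of 100 features" the binomial encoding exceeds 10^13 clauses, while the PB representation remains a single linear inequality.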