🤖 AI Summary
This study investigates the parameterized complexity of Bayesian Network Structure Learning (BNSL) under superstructure constraints. By combining graph-theoretic parameters (such as feedback edge set size, local feedback edge set size, and treewidth) with different input representations, in particular the additive representation, the work systematically analyzes when BNSL is fixed-parameter tractable. The main contributions include the first proof that BNSL is fixed-parameter tractable when parameterized by the size of a feedback edge set, a result that is then strengthened to local feedback edge sets. Furthermore, the paper establishes that under the additive representation, treewidth alone suffices for fixed-parameter tractability, a finding that also carries over to Polytree Learning. Together with matching conditional lower bounds, these results yield a complete complexity classification of BNSL with respect to virtually all well-studied graph parameters, thereby significantly advancing the theoretical foundations of BNSL.
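For readers unfamiliar with the central parameter: the feedback edge set size of an undirected graph is the minimum number of edges whose removal leaves a forest, and it can be computed directly from a spanning forest. The following minimal Python sketch (not code from the paper; the function name and the use of networkx are assumptions for illustration) shows the computation:

```python
# A minimal sketch, assuming networkx is available: the feedback edge set
# size of an undirected superstructure G equals |E| - |V| + c, where c is
# the number of connected components, because deleting a spanning forest
# removes exactly |V| - c edges and every remaining edge closes a cycle.
import networkx as nx

def feedback_edge_set_size(G: nx.Graph) -> int:
    """Minimum number of edges to delete so that G becomes a forest."""
    return (G.number_of_edges()
            - G.number_of_nodes()
            + nx.number_connected_components(G))

# Example: a 4-cycle with one chord needs two edge deletions.
G = nx.cycle_graph(4)
G.add_edge(0, 2)
print(feedback_edge_set_size(G))  # -> 2
```

Parameterizing by this quantity, rather than by vertex-deletion parameters such as vertex cover, is what makes the tractability result above possible.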
📝 Abstract
We investigate the parameterized complexity of Bayesian Network Structure Learning (BNSL), a classical problem that has received significant attention in both empirical and purely theoretical studies. We follow up on previous work that analyzed the complexity of BNSL with respect to the so-called superstructure of the input. While known results imply that BNSL is unlikely to be fixed-parameter tractable even when parameterized by the size of a vertex cover in the superstructure, here we show that a different kind of parameterization, notably by the size of a feedback edge set, yields fixed-parameter tractability. We proceed by showing that this result can be strengthened to a localized version of the feedback edge set, and we provide corresponding lower bounds that, together with previous results, yield a complexity classification of BNSL with respect to virtually all well-studied graph parameters. We then analyze how the complexity of BNSL depends on the representation of the input. In particular, while the bulk of past theoretical work on the topic assumed the use of the so-called non-zero representation, here we prove that if an additive representation can be used instead, then BNSL becomes fixed-parameter tractable even under significantly milder restrictions on the superstructure, notably when parameterized by treewidth alone. Last but not least, we show how our results can be extended to the closely related problem of Polytree Learning.
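To make the problem statement concrete: in superstructure-constrained BNSL, every vertex is assigned a parent set drawn from its superstructure neighbours, the resulting directed graph must be acyclic, and the goal is to maximize the sum of local scores. The brute-force Python sketch below is purely illustrative (all names are assumptions, and the paper's algorithms are far more refined); note that it treats each local score as a black-box callable, whereas the non-zero versus additive distinction discussed above concerns precisely how those score functions are encoded in the input.

```python
# A brute-force sketch of superstructure-constrained BNSL, for intuition only.
from itertools import chain, combinations, product

def bnsl_brute_force(vertices, neighbours, score):
    """neighbours[v]: superstructure neighbours of v;
    score(v, parents): local score of giving v the parent set `parents`."""
    def subsets(s):
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

    def acyclic(parent_map):
        seen, done = set(), set()
        def visit(v):
            if v in done:
                return True
            if v in seen:
                return False          # revisiting a gray node => directed cycle
            seen.add(v)
            ok = all(visit(p) for p in parent_map[v])
            done.add(v)
            return ok
        return all(visit(v) for v in vertices)

    best, best_map = float("-inf"), None
    # Try every combination of parent sets consistent with the superstructure.
    for choice in product(*[list(subsets(neighbours[v])) for v in vertices]):
        parent_map = dict(zip(vertices, map(frozenset, choice)))
        if acyclic(parent_map):
            total = sum(score(v, parent_map[v]) for v in vertices)
            if total > best:
                best, best_map = total, parent_map
    return best, best_map

# Toy usage: path superstructure a - b - c, score = number of parents chosen.
V = ["a", "b", "c"]
N = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
best, parents = bnsl_brute_force(V, N, lambda v, P: len(P))
print(best, parents)  # best == 2, e.g. b takes parents {a, c}
```

The running time of this naive enumeration is exponential in the superstructure neighbourhood sizes; the structural parameters studied in the paper are aimed at exactly this blow-up.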