🤖 AI Summary
To address the state explosion problem in NFA complementation caused by traditional determinization, this paper proposes an efficient complement construction that avoids full determinization. Our method comprises two key innovations: (1) an inverse powerset construction that reverse-simulates the powerset process underlying DFA-based complementation; and (2) two structure-aware symbolic transition techniques that exploit commonly occurring NFA features (including ε-transitions, state sharing, and local determinism) to keep the intermediate automata small. Both constructions preserve the language of the complement. Experimental evaluation on large-scale benchmarks shows that our approach reduces the number of states in the complement automaton by 62%–89% compared to the classical subset-construction-then-complement method, while reducing runtime by one to two orders of magnitude. These gains significantly improve the practicality and scalability of NFA complementation in applications such as formal verification, regular-expression negation, and program analysis.
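The paper's inverse powerset construction is not spelled out in this summary, but reverse-powerset approaches rest on a simple identity: the reverse of an NFA accepts exactly the reversed words, and complementation commutes with reversal. As a rough intuition only (the representation and the function name below are illustrative assumptions, not the authors' construction), reversing an NFA can be sketched as:

```python
def reverse_nfa(delta, starts, finals):
    """Reverse an NFA: flip every transition and swap the start/final sets.

    delta maps (state, symbol) -> set of successor states.
    The reversed NFA accepts exactly the reversals of the words the
    original accepts; complementing the reverse language and reversing
    once more therefore yields the complement of the original language.
    """
    rdelta = {}
    for (p, a), succs in delta.items():
        for q in succs:
            # an original edge p --a--> q becomes q --a--> p
            rdelta.setdefault((q, a), set()).add(p)
    return rdelta, set(finals), set(starts)
```

Determinizing this reversed NFA by the powerset construction, complementing, and reversing again is one known route to the complement that can, on some inputs, produce far fewer subset states than determinizing the original automaton directly.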
📝 Abstract
Complementation of finite automata is a basic operation used in numerous applications. The standard way to complement a nondeterministic finite automaton (NFA) is to transform it into an equivalent deterministic finite automaton (DFA) and complement the DFA. The DFA can, however, be exponentially larger than the corresponding NFA. In this paper, we study several alternative approaches to complementation, which are based either on reverse powerset construction or on two novel constructions that exploit a commonly occurring structure of NFAs. Our experiments on a large data set show that using an approach other than the classical one can in many cases yield significantly smaller complements.
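The classical pipeline the abstract refers to, subset construction followed by flipping acceptance, can be sketched as follows. This is a minimal illustration assuming an NFA given as a transition map from (state, symbol) to successor sets; the function names are hypothetical:

```python
from collections import deque

def determinize_and_complement(alphabet, delta, start, accepting):
    """Classical subset construction, then complement the resulting DFA.

    delta maps (state, symbol) -> set of NFA successor states.
    Returns (dfa_states, dfa_delta, dfa_start, dfa_accepting), where each
    DFA state is a frozenset of NFA states.
    """
    dfa_start = frozenset([start])
    dfa_states = {dfa_start}
    dfa_delta = {}
    queue = deque([dfa_start])
    while queue:
        S = queue.popleft()
        for a in alphabet:
            T = frozenset(t for s in S for t in delta.get((s, a), set()))
            dfa_delta[(S, a)] = T
            if T not in dfa_states:  # exponential blow-up happens here
                dfa_states.add(T)
                queue.append(T)
    # The subset construction yields a complete DFA (the empty subset acts
    # as a sink), so flipping acceptance complements the language.
    dfa_accepting = {S for S in dfa_states if not (S & accepting)}
    return dfa_states, dfa_delta, dfa_start, dfa_accepting

def accepts(dfa_delta, dfa_start, dfa_accepting, word):
    """Run the (complete) DFA on a word and report acceptance."""
    S = dfa_start
    for a in word:
        S = dfa_delta[(S, a)]
    return S in dfa_accepting
```

The worklist loop is precisely where the exponential blow-up arises: for an NFA with n states, up to 2^n subsets may become reachable, which is what motivates the determinization-avoiding constructions studied in the paper.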