A Formal Framework for the Explanation of Finite Automata Decisions

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of explaining why a finite automaton makes a specific decision on a given input word and how its output can be altered through minimal modifications. We formally define the notion of a minimal feature explanation for automaton decisions as the smallest set of critical input symbols responsible for the current acceptance or rejection outcome. To compute all such minimal explanations exactly, we propose an efficient algorithm that integrates formal methods, automata theory, and combinatorial optimization. Experimental evaluation demonstrates that our approach scales well in complex scenarios and consistently produces unbiased, accurate minimal explanations, thereby providing rigorous interpretability guarantees for automaton-based decisions.

📝 Abstract
Finite automata (FA) are a fundamental computational abstraction that is widely used in practice for various tasks in computer science, linguistics, biology, electrical engineering, and artificial intelligence. Given an input word, an FA maps the word to a result, in the simple case "accept" or "reject", but in general to one of a finite set of results. A question that then arises is: why? Another question is: how can we modify the input word so that it is no longer accepted? One may think that the automaton itself is an adequate explanation of its behaviour, but automata can be very complex and difficult to make sense of directly. In this work, we investigate how to explain the behaviour of an FA on an input word in terms of the word's characters. In particular, we are interested in minimal explanations: what is the minimal set of input characters that explains the result, and what are the minimal changes needed to alter the result? In this paper, we propose an efficient method to determine all minimal explanations for the behaviour of an FA on a particular word. This allows us to give unbiased explanations about which input features are responsible for the result. Experiments show that our approach scales well, even when the underlying problem is challenging.
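To make the notion concrete, here is a minimal brute-force sketch of what a "minimal explanation" means, not the paper's algorithm (which combines formal methods, automata theory, and combinatorial optimization). The toy language (words over {a, b} containing at least one 'a'), the `accepts` stand-in for running an FA, and the function name are all hypothetical illustrations: a set of fixed positions explains the outcome if every completion of the remaining positions yields the same result.

```python
from itertools import combinations, product

# Hypothetical toy example: an FA over {a, b} accepting words with at least one 'a'.
ALPHABET = "ab"

def accepts(word):
    """Stand-in for running the automaton on the word."""
    return "a" in word

def minimal_explanations(word):
    """Return all minimum-size sets of positions that fix the accept/reject outcome.

    A set of positions is an explanation if the outcome is invariant under
    every reassignment of the characters at the remaining (free) positions.
    """
    target = accepts(word)
    n = len(word)
    for size in range(n + 1):  # smallest sets first, so the first hits are minimal
        found = []
        for fixed in combinations(range(n), size):
            free = [i for i in range(n) if i not in fixed]
            # Outcome must be invariant under every completion of the free positions.
            if all(
                accepts("".join(
                    word[i] if i in fixed else choice[free.index(i)]
                    for i in range(n)))
                == target
                for choice in product(ALPHABET, repeat=len(free))
            ):
                found.append(set(fixed))
        if found:
            return found
    return [set()]
```

For the word "ba" (accepted), the single 'a' at position 1 already guarantees acceptance whatever replaces the 'b', so the unique minimal explanation is {1}; for "bb" (rejected), both positions must be fixed to rule out an 'a'. This enumeration is exponential in the word length, which is exactly the scaling problem the paper's exact method is designed to avoid.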
Problem

Research questions and friction points this paper is trying to address.

finite automata
explanation
minimal explanation
input word
decision reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

finite automata
minimal explanation
input attribution
formal explainability
automata behavior
Jaime Cuartas Granada
Department of Data Science and AI, Faculty of IT, Monash University, Melbourne, Victoria, Australia
Alexey Ignatiev
Associate Professor, Monash University
Satisfiability, Computational Logic, Automated Reasoning, Artificial Intelligence, Explainability
Peter J. Stuckey
Department of Data Science and AI, Faculty of IT, Monash University, Melbourne, Victoria, Australia