🤖 AI Summary
This paper examines the gap between what software developers need when building AI systems and what current explainable AI (XAI) tools provide, with a focus on understanding and debugging AI-based software. Method: The authors report insights from a series of surveys with software developers about their need for explanatory tooling when developing and maintaining AI systems. Contribution/Results: The surveys indicate a clear and growing demand for explanatory tools that support developers in tasks such as debugging, while also showing that existing XAI systems fall short of this need. Building on these findings, the paper argues for developer-oriented support mechanisms that help practitioners cope with the complexity of AI so they can embed it into high-quality software, shifting the emphasis of XAI from algorithmic interpretability toward engineering-level support.
📝 Abstract
With artificial intelligence (AI) embedded in many everyday software systems, effectively and reliably developing and maintaining AI systems becomes an essential skill for software developers. However, the complexity inherent to AI poses new challenges. Explainable AI (XAI) may allow developers to better understand the systems they build, which, in turn, can help with tasks like debugging. In this paper, we report insights from a series of surveys with software developers that highlight that there is indeed an increased need for explanatory tools to support developers in creating AI systems. However, the feedback also indicates that existing XAI systems still fall short of this aspiration. Thus, we see an unmet need to provide developers with adequate support mechanisms to cope with this complexity so they can embed AI into high-quality software in the future.