🤖 AI Summary
This work proposes CF-HyperGNNExplainer, the first counterfactual explanation method tailored to Hypergraph Neural Networks (HGNNs). Although HGNNs can model high-order interactions, their limited interpretability hinders deployment in high-stakes applications. The method generates minimal structural perturbations, restricted to removing node-hyperedge incidences or entire hyperedges, to identify the high-order relationships most responsible for a given prediction. By design, the resulting explanations are actionable, structurally coherent, and concise. Experimental evaluation on three benchmark datasets demonstrates that CF-HyperGNNExplainer uncovers the core hypergraph structures driving model decisions, providing reliable and human-understandable insights into HGNN behavior.
📝 Abstract
Hypergraph neural networks (HGNNs) effectively model higher-order interactions in many real-world systems but remain difficult to interpret, limiting their deployment in high-stakes settings. We introduce CF-HyperGNNExplainer, a counterfactual explanation method for HGNNs that identifies the minimal structural changes required to alter a model's prediction. The method generates counterfactual hypergraphs using actionable edits limited to removing node-hyperedge incidences or deleting hyperedges, producing concise and structurally meaningful explanations. Experiments on three benchmark datasets show that CF-HyperGNNExplainer generates valid and concise counterfactuals, highlighting the higher-order relations most critical to HGNN decisions.
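The search described in the abstract, finding a minimal set of incidence or hyperedge removals that flips a model's prediction, can be illustrated with a toy greedy procedure. This is a hypothetical sketch, not the paper's actual algorithm: the one-layer mean-aggregation "HGNN" and the margin-based greedy edit selection are assumptions made purely for illustration.

```python
import numpy as np

def hgnn_logits(H, X, W):
    """Toy one-layer hypergraph convolution (an assumed stand-in for a trained HGNN):
    node features -> hyperedge means -> node means -> linear classifier."""
    De = np.maximum(H.sum(axis=0), 1.0)        # hyperedge degrees (avoid /0)
    Dv = np.maximum(H.sum(axis=1), 1.0)        # node degrees
    edge_feat = (H.T @ X) / De[:, None]        # mean feature of each hyperedge's members
    node_feat = (H @ edge_feat) / Dv[:, None]  # mean over a node's incident hyperedges
    return node_feat @ W                       # per-node class scores

def hgnn_predict(H, X, W):
    return hgnn_logits(H, X, W).argmax(axis=1)

def greedy_counterfactual(H, X, W, target, max_edits=5):
    """Greedily delete node-hyperedge incidences (entries of H) until the target
    node's prediction flips; a hyperedge whose last incidence is removed is
    effectively deleted. Returns (perturbed H, list of removed (node, edge))."""
    orig = hgnn_predict(H, X, W)[target]
    Hc, edits = H.astype(float).copy(), []
    for _ in range(max_edits):
        if hgnn_predict(Hc, X, W)[target] != orig:
            return Hc, edits                   # counterfactual found
        # pick the single removal that most shrinks the original class's margin
        def margin(ve):
            trial = Hc.copy(); trial[ve] = 0.0
            z = hgnn_logits(trial, X, W)[target]
            return z[orig] - np.delete(z, orig).max()
        v, e = min(zip(*np.nonzero(Hc)), key=margin)
        Hc[v, e] = 0.0
        edits.append((int(v), int(e)))
    ok = hgnn_predict(Hc, X, W)[target] != orig
    return (Hc, edits) if ok else (None, edits)  # None: edit budget exhausted
```

For instance, with four nodes spread over two overlapping hyperedges, removing a single incidence of the target node can already flip its predicted class, yielding a one-edit counterfactual.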