A Cognac shot to forget bad memories: Corrective Unlearning in GNNs

📅 2024-12-01
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the problem that malicious or erroneous nodes in Graph Neural Networks (GNNs) propagate corrupted messages during neighborhood aggregation, thereby degrading model performance, this paper proposes Cognac, a post-training, graph-structure-aware corrective unlearning method. Cognac requires neither full retraining nor knowledge of the complete set of manipulated entities: it can remove the adverse influence of a manipulation even when only ~5% of the manipulated nodes are identified. Evaluated on multiple benchmark graph datasets, Cognac recovers most of the performance of a strong oracle retrained from scratch on fully corrected data, significantly outperforming existing graph unlearning approaches, while being 8× more efficient than full retraining. Cognac thus offers a selective GNN unlearning solution that combines strong accuracy recovery, high efficiency, and minimal dependence on identifying every corrupted node.

📝 Abstract
Graph Neural Networks (GNNs) are increasingly being used for a variety of ML applications on graph data. Because graph data does not follow the independently and identically distributed (i.i.d.) assumption, adversarial manipulations or incorrect data can propagate to other data points through message passing, which deteriorates the model's performance. To allow model developers to remove the adverse effects of manipulated entities from a trained GNN, we study the recently formulated problem of Corrective Unlearning. We find that current graph unlearning methods fail to unlearn the effect of manipulations even when the whole manipulated set is known. We introduce a new graph unlearning method, Cognac, which can unlearn the effect of the manipulation set even when only 5% of it is identified. It recovers most of the performance of a strong oracle with fully corrected training data, even beating retraining from scratch without the deletion set while being 8x more efficient. We hope our work assists GNN developers in mitigating harmful effects caused by issues in real-world data post-training. Our code is publicly available at https://github.com/varshitakolipaka/corrective-unlearning-for-gnns
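A common recipe for this kind of corrective unlearning, and a useful mental model for the abstract above, is to take gradient-ascent steps on the loss of the identified manipulated ("forget") nodes while taking ordinary descent steps on the remaining ("retain") nodes. The sketch below is a hypothetical minimal illustration of that idea on a toy one-layer graph model (normalized-adjacency aggregation plus a softmax classifier), not the authors' actual Cognac implementation; the graph, labels, and the `loss_and_grad` helper are all assumptions for illustration.

```python
# Toy ascent-descent corrective unlearning on a one-layer GCN-style model.
# NOT the Cognac implementation; a minimal sketch of the general idea.
import numpy as np

rng = np.random.default_rng(0)

n, d, c = 8, 4, 2                        # nodes, feature dim, classes
X = rng.normal(size=(n, d))              # node features
A = np.eye(n)                            # adjacency with self-loops
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7)]:
    A[u, v] = A[v, u] = 1.0
deg = A.sum(1)
A_hat = A / np.sqrt(np.outer(deg, deg))  # symmetric normalization
H = A_hat @ X                            # one round of message passing
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # true node labels

def loss_and_grad(W, idx):
    """Mean softmax cross-entropy and its gradient w.r.t. W on nodes idx."""
    logits = H[idx] @ W
    logits -= logits.max(1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(1, keepdims=True)
    onehot = np.eye(c)[y[idx]]
    loss = -np.mean(np.log(p[np.arange(len(idx)), y[idx]] + 1e-12))
    grad = H[idx].T @ (p - onehot) / len(idx)
    return loss, grad

# Train on manipulated data: nodes 6 and 7 carry flipped labels.
y[[6, 7]] = 0
W = np.zeros((d, c))
for _ in range(200):
    _, g = loss_and_grad(W, np.arange(n))
    W -= 0.5 * g

# Corrective unlearning with only PARTIAL identification:
# just node 6 (not 7) is flagged as manipulated.
forget, retain = np.array([6]), np.array([0, 1, 2, 3, 4, 5])
for _ in range(50):
    _, g_f = loss_and_grad(W, forget)
    W += 0.05 * g_f                      # ascent: unlearn the manipulated node
    _, g_r = loss_and_grad(W, retain)
    W -= 0.05 * g_r                      # descent: preserve clean performance
```

Because the flipped label on node 6 has propagated through aggregation, pushing the model away from it (ascent) while re-fitting the clean nodes (descent) also dampens the manipulation's influence on unidentified neighbors such as node 7, which is the intuition behind unlearning from only a fraction of the manipulated set.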
Problem

Research questions and friction points this paper is trying to address.

Remove adversarial effects from trained GNNs
Corrective unlearning for manipulated graph data
Improve GNN performance with partial manipulation identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Corrective Unlearning for GNNs
Unlearns manipulations with partial data
Efficient performance recovery post-training