🤖 AI Summary
To address catastrophic forgetting and concept drift in anti-money laundering (AML) systems caused by continual model fine-tuning, this paper presents the first systematic evaluation of continual graph learning for financial transaction graphs. Methodologically, it establishes a three-category taxonomy of continual learning strategies for graph neural networks (GNNs): replay-based, regularization-based, and architecture-based. It then systematically evaluates their robustness under extreme class imbalance on both synthetic and real-world transaction graph datasets. The key contributions are: (i) the first adaptation of continual learning paradigms to AML graph modeling, identifying the mechanisms that mitigate forgetting; and (ii) empirical validation, via hyperparameter sensitivity analysis and comprehensive evaluation, that continual learning significantly improves model adaptability and detection stability over time. The results point towards an evolvable, forgetting-resistant detection paradigm for regulatory technology.
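As an illustration of the regularization-based family mentioned above, elastic weight consolidation (EWC) anchors parameters that were important for previously seen fraud patterns by adding a quadratic penalty to the fine-tuning loss. This is a generic sketch of the EWC objective, not necessarily the exact formulation evaluated in the paper:

$$
\mathcal{L}(\theta) \;=\; \mathcal{L}_{\text{new}}(\theta) \;+\; \frac{\lambda}{2} \sum_i F_i \left(\theta_i - \theta_i^{*}\right)^2
$$

Here $\mathcal{L}_{\text{new}}$ is the loss on the new batch of transactions, $\theta_i^{*}$ are the parameter values after training on earlier data, $F_i$ is the (diagonal) Fisher information estimating how important parameter $i$ was for the old task, and $\lambda$ trades off plasticity against retention.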
📝 Abstract
Financial institutions are required by regulation to report suspicious financial transactions related to money laundering. Therefore, they need to constantly monitor vast amounts of incoming and outgoing transactions. A particular challenge in detecting money laundering is that money launderers continuously adapt their tactics to evade detection. Hence, detection methods need constant fine-tuning. Traditional machine learning models suffer from catastrophic forgetting when fine-tuned on new data, thereby limiting their effectiveness in dynamic environments. Continual learning methods may address this issue and enhance current anti-money laundering (AML) practices by allowing models to incorporate new information while retaining prior knowledge. Research on continual graph learning for AML, however, is still scarce. In this review, we critically evaluate state-of-the-art continual graph learning approaches for AML applications. We categorise methods into replay-based, regularization-based, and architecture-based strategies within the graph neural network (GNN) framework, and we provide in-depth experimental evaluations on both synthetic and real-world AML data sets that showcase the effects of different hyperparameters. Our analysis demonstrates that continual learning improves model adaptability and robustness in the face of extreme class imbalances and evolving fraud patterns. Finally, we outline key challenges and propose directions for future research.
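The replay-based strategy from the taxonomy above can be sketched in a few lines: keep a bounded buffer of past (possibly fraudulent) transaction samples via reservoir sampling, and mix a fraction of them into each fine-tuning batch so the model keeps seeing old fraud patterns. The class and function names (`ReplayBuffer`, `make_batch`) are illustrative, not taken from the paper:

```python
import random


class ReplayBuffer:
    """Bounded buffer of past samples, filled by reservoir sampling
    so that every sample seen so far has equal probability of being kept."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total samples offered to the buffer

    def add(self, sample):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # Replace a stored sample with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))


def make_batch(new_samples, buffer, replay_ratio=0.5):
    """Mix replayed old samples into a batch of new transactions.

    replay_ratio controls how many old samples are appended relative
    to the number of new samples in the batch."""
    k = int(len(new_samples) * replay_ratio)
    return new_samples + buffer.sample(k)
```

In a GNN setting, the buffered "samples" would typically be node IDs or small subgraphs around flagged transactions, re-expanded into neighbourhoods at training time; the mixing logic stays the same.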