ROKA: Robust Knowledge Unlearning against Adversaries

📅 2026-02-27
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the critical challenge of knowledge contamination in existing machine unlearning methods, which often inadvertently degrade related knowledge and remain vulnerable to indirect unlearning attacks, posing serious risks to model accuracy in safety-critical applications. To this end, we propose ROKA, a robust unlearning framework grounded in Neural Healing and a novel Neural Knowledge System theory. ROKA precisely removes target data while actively reinforcing its conceptual neighborhood, thereby achieving knowledge balance. Notably, ROKA provides the first theoretical guarantee for knowledge retention during unlearning and introduces a data-manipulation-free model of indirect unlearning attacks along with an effective defense mechanism. Experiments across vision transformers, multimodal models, and large language models demonstrate that ROKA not only executes unlearning tasks efficiently but also maintains or even improves accuracy on retained data while effectively resisting indirect attacks.
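To make the attack model concrete, here is a minimal sketch of what a data-manipulation-free indirect unlearning attack could look like: the adversary submits otherwise legitimate forget requests chosen to sit near a security-critical class in embedding space, so that knowledge contamination spills over onto it. The nearest-prototype selection heuristic and the names `embed`, `candidate_pool`, `victim_proto`, and `k` are illustrative assumptions, not details from the paper.

```python
import torch

def select_indirect_requests(embed, candidate_pool, victim_proto, k=32):
    """Hypothetical indirect unlearning attack (assumed formulation):
    pick legitimate samples whose embeddings lie closest to a
    security-critical class prototype, so unlearning them maximizes
    collateral knowledge contamination on that class."""
    with torch.no_grad():
        feats = embed(candidate_pool)                   # (N, d) embeddings
        dists = torch.cdist(feats, victim_proto[None])  # (N, 1) L2 distances
    # k samples nearest to the victim-class prototype
    nearest = torch.topk(dists.squeeze(1), k=k, largest=False).indices
    return candidate_pool[nearest]  # submitted as ordinary unlearning requests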

📝 Abstract
The need for machine unlearning is critical for data privacy, yet existing methods often cause Knowledge Contamination by unintentionally damaging related knowledge. This degraded post-unlearning performance has recently been leveraged for new inference and backdoor attacks, though most such studies design adversarial unlearning requests that require poisoning or duplicating training data. In this study, we introduce a new unlearning-induced attack model, the indirect unlearning attack, which requires no data manipulation but exploits the consequences of knowledge contamination to perturb model accuracy on security-critical predictions. To mitigate this attack, we introduce a theoretical framework that models neural networks as Neural Knowledge Systems. Based on this, we propose ROKA, a robust unlearning strategy centered on Neural Healing. Unlike conventional unlearning methods that only destroy information, ROKA constructively rebalances the model by nullifying the influence of forgotten data while strengthening its conceptual neighbors. To the best of our knowledge, our work is the first to provide a theoretical guarantee for knowledge preservation during unlearning. Evaluations on various large models, including vision transformers, multi-modal models, and large language models, show that ROKA effectively unlearns targets while preserving, or even enhancing, the accuracy of retained data, thereby mitigating indirect unlearning attacks.
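On the defense side, the Neural Healing idea (nullify the forgotten data's influence while strengthening its conceptual neighbors) can be pictured as a composite training objective. The sketch below is our own minimal PyTorch interpretation; the loss weights `alpha`/`beta`, the negated cross-entropy term for forgetting, and the batch names are assumptions rather than ROKA's actual formulation.

```python
import torch.nn.functional as F

def neural_healing_step(model, optimizer, forget_batch, neighbor_batch,
                        retain_batch, alpha=1.0, beta=0.5):
    """One hypothetical 'healing' update: unlearn the forget set while
    reinforcing its conceptual neighbors and ordinary retained data."""
    optimizer.zero_grad()

    # 1) Nullify the forget set's influence: gradient ascent on its loss,
    #    written here as a negated cross-entropy term.
    fx, fy = forget_batch
    forget_term = -F.cross_entropy(model(fx), fy)

    # 2) Strengthen the conceptual neighborhood of the forgotten targets
    #    (samples from semantically related classes, chosen beforehand).
    nx, ny = neighbor_batch
    neighbor_term = F.cross_entropy(model(nx), ny)

    # 3) Anchor overall utility on ordinary retained data.
    rx, ry = retain_batch
    retain_term = F.cross_entropy(model(rx), ry)

    loss = forget_term + alpha * neighbor_term + beta * retain_term
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point this illustrates is that the forgetting term alone would only destroy information; the neighbor and retain terms are what make the update constructive rather than purely destructive.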
Problem

Research questions and friction points this paper is trying to address.

machine unlearning
knowledge contamination
indirect unlearning attack
data privacy
model robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

machine unlearning
knowledge contamination
neural healing
indirect unlearning attack
neural knowledge systems
👥 Authors
Jinmyeong Shin, University of California, Merced, USA
Joshua Tapia, University of California, Merced, USA
Nicholas Ferreira, California State University, East Bay, USA
Gabriel Diaz, University of California, Merced, USA
Moayed Daneshyari, California State University, East Bay, USA
Hyeran Jeon, Associate Professor, University of California, Merced; research focus: Energy-Efficient and Reliable Computing