Avoiding Copyright Infringement via Large Language Model Unlearning

📅 2024-06-16
📈 Citations: 5
Influential: 0
🤖 AI Summary
Existing unlearning methods do not support sequential, time-sensitive copyright compliance, such as removal requests that arrive repeatedly over time, and do not address infringement risks arising from the pretraining data of large language models (LLMs). This work presents the first systematic study of sequential unlearning in copyright-sensitive scenarios and introduces Stable Sequential Unlearning (SSU), a novel framework. SSU identifies and removes the specific weight updates in the model's parameters that correspond to copyrighted content, augmented by a random labeling loss and targeted fine-tuning of selected parameters to balance forgetting stability with knowledge retention. Evaluated across multiple benchmarks, SSU substantially outperforms prior approaches: it eliminates generation of infringing content while incurring less than 2.1% degradation in both language modeling and downstream task performance.

📝 Abstract
Pre-trained Large Language Models (LLMs) have demonstrated remarkable capabilities but also pose risks by learning and generating copyrighted material, leading to significant legal and ethical concerns. In real-world scenarios, model owners need to continuously address copyright infringement as new requests for content removal emerge at different time points. This leads to the need for sequential unlearning, where copyrighted content is removed sequentially as new requests arise. Despite its practical relevance, sequential unlearning in the context of copyright infringement has not been rigorously explored in existing literature. To address this gap, we propose Stable Sequential Unlearning (SSU), a novel framework designed to unlearn copyrighted content from LLMs over multiple time steps. Our approach works by identifying and removing specific weight updates in the model's parameters that correspond to copyrighted content. We improve unlearning efficacy by introducing random labeling loss and ensuring the model retains its general-purpose knowledge by adjusting targeted parameters. Experimental results show that SSU achieves an effective trade-off between unlearning efficacy and general-purpose language abilities, outperforming existing baselines.
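The weight-update removal the abstract describes resembles task-vector negation: subtract the parameter delta attributable to the copyrighted content from the fine-tuned weights. A minimal numpy sketch, assuming flattened parameter vectors; the function name and scaling factor are illustrative, not the paper's exact formulation:

```python
import numpy as np

def unlearn_by_negation(theta_base, theta_ft, alpha=1.0):
    """Remove the weight update attributed to copyrighted content.

    theta_base: parameters before fine-tuning on the content to forget
    theta_ft:   parameters after fine-tuning on that content
    alpha:      scaling factor for the negated update
    """
    task_vector = theta_ft - theta_base      # update attributed to the content
    return theta_ft - alpha * task_vector    # subtract it back out

# Toy example with flattened parameter vectors
theta_base = np.array([0.10, -0.20, 0.30])
theta_ft = np.array([0.15, -0.25, 0.40])
theta_unlearned = unlearn_by_negation(theta_base, theta_ft, alpha=1.0)
```

With `alpha=1.0` this recovers the pre-fine-tuning weights exactly; in sequential unlearning the subtraction is applied at each time step as new removal requests arrive.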
Problem

Research questions and friction points this paper is trying to address.

Prevents copyright infringement in LLMs
Enables sequential unlearning of copyrighted content
Balances unlearning efficacy and language capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sequential Unlearning for Copyright
Random Labeling Loss Enhancement
Targeted Parameter Adjustment Strategy
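The random labeling loss listed above can be sketched as cross-entropy against uniformly sampled target tokens on the forget set, which pushes the model's predictions toward noise for that content. A hedged numpy sketch; the function name and the exact weighting/masking are assumptions, not the paper's definition:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_labeling_loss(logits, vocab_size, rng):
    """Cross-entropy against uniformly random labels, one per position.

    Training on the forget set with random targets degrades the model's
    ability to reproduce that content verbatim.
    """
    seq_len = logits.shape[0]
    random_labels = rng.integers(0, vocab_size, size=seq_len)
    # log-softmax over the vocabulary dimension
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(seq_len), random_labels].mean()

logits = rng.normal(size=(4, 10))  # 4 positions, vocabulary of 10 tokens
loss = random_labeling_loss(logits, vocab_size=10, rng=rng)
```

In SSU this loss would be combined with the targeted parameter adjustment so that general-purpose knowledge outside the forget set is retained.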