SeaPO: Strategic Error Amplification for Robust Preference Optimization of Large Language Models

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM preference optimization methods suffer from limited effectiveness due to low discriminability between positive and negative samples, especially when a model's scoring or generation capability is constrained. To address this, the paper proposes a Strategic Error Amplification mechanism: it identifies three canonical error patterns and controllably injects them to produce negative samples, significantly widening the semantic and quality margins between positive and negative instances. Crucially, it explicitly models the error-type distribution as a guiding signal for preference learning, a signal the authors present as previously unexplored. The method integrates generative data augmentation with contrastive training, enabling multi-dimensional alignment optimization across models from 1.5B to 14B parameters. Experiments demonstrate consistent improvements across five core capability dimensions, with factual accuracy rising by 5-10 percentage points. Mixed error injection further yields broad performance gains, balancing task specificity and generalization.
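To make the mechanism concrete, here is a minimal sketch of the injection step: a generator model is prompted to corrupt a known-good answer with a specific error type, so the rejected sample is worse than the chosen one by construction. The error-type names, the `inject_error` helper, and the `model.generate` call are illustrative assumptions, not the authors' code or taxonomy.

```python
import random

# Hypothetical placeholders: the paper uses three canonical error types,
# but they are not enumerated here, so these names are illustrative only.
ERROR_TYPES = ["factual", "logical", "instruction_following"]

ERROR_INSTRUCTIONS = {
    "factual": "Rewrite the answer to contain a subtle factual error.",
    "logical": "Rewrite the answer so its reasoning contains a logical flaw.",
    "instruction_following": "Rewrite the answer so it ignores part of the instruction.",
}

def inject_error(model, prompt, good_answer, error_type):
    """Ask a generator model to corrupt a good answer with one error type."""
    corruption_prompt = (
        f"{ERROR_INSTRUCTIONS[error_type]}\n\n"
        f"Question: {prompt}\nAnswer: {good_answer}\nCorrupted answer:"
    )
    return model.generate(corruption_prompt)  # assumed .generate() API

def build_preference_pairs(model, dataset, error_mix=None):
    """Build (chosen, rejected) pairs with a quality gap widened by design.

    `error_mix` controls the error-type distribution, mirroring the
    summary's point that this distribution itself guides preference learning.
    """
    error_mix = error_mix or ERROR_TYPES
    pairs = []
    for prompt, good_answer in dataset:
        error_type = random.choice(error_mix)
        pairs.append({
            "prompt": prompt,
            "chosen": good_answer,
            "rejected": inject_error(model, prompt, good_answer, error_type),
            "error_type": error_type,  # kept as metadata for later analysis
        })
    return pairs
```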

📝 Abstract
Existing alignment methods for preference optimization of large language models (LLMs) aim to enhance model performance by utilizing pairs of positive and negative samples. However, due to the limited capacity of models in scoring or generating responses, the quality of positive and negative samples may become similar during training, which complicates optimization for preference learning. To address this issue, we introduce SeaPO, a Strategic Error Amplification method that leverages three error types commonly occurring in LLMs to inject specific error patterns into preference optimization. This strategy ensures that negative samples are more erroneous than positive samples; preference-based training is then employed to mitigate the occurrence of these errors, thereby enhancing model performance. Evaluations across five capability dimensions and different model scales (1.5B to 14B) demonstrate that the generated data significantly improves overall model performance, particularly in terms of truthfulness, with improvements of 5-10 percentage points. Further analysis reveals that task performance varies with the error types introduced: injecting the most common error type improves performance on related tasks, while a mix of error types yields broader gains, with stable improvements on most tasks and significant gains on a few.
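The abstract does not name the preference objective; assuming a standard DPO-style loss over the amplified pairs, a training step would look roughly like the PyTorch sketch below, where each argument is a per-sequence log-probability of the chosen or rejected response under the policy or the frozen reference model.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO-style preference loss (assumed objective; the paper may differ).

    Because the rejected samples carry injected errors, the implied reward
    margin (chosen minus rejected) is large by construction, which restores
    the discriminability that motivates the method.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

The design point is that SeaPO's contribution sits upstream of this loss: it manufactures pairs whose quality margin is large before any gradient is taken.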
Problem

Research questions and friction points this paper is trying to address.

Addresses the weak training signal that arises when positive and negative samples are of similar quality
Amplifies strategic error patterns so that negative samples are clearly distinguishable from positive ones
Aims to enhance model performance consistently across multiple capability dimensions and model scales
Innovation

Methods, ideas, or system contributions that make the work stand out.

Strategically amplifies errors when constructing negative samples for preference optimization
Leverages three error types commonly produced by LLMs to build training pairs
Improves model truthfulness by 5-10 percentage points
Jun Rao
Harbin Institute of Technology (Shenzhen)
LLMs · Efficient Post-training · Knowledge Distillation · Multimodal
Yunjie Liao
Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen
Xuebo Liu
Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen
Zepeng Lin
Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen
Lian Lian
Huawei Cloud Computing Technologies Co., Ltd.
Dong Jin
Huawei Cloud Computing Technologies Co., Ltd.
Shengjun Cheng
Huawei Cloud Computing Technologies Co., Ltd.
Jun Yu
School of Intelligence Science and Engineering, Harbin Institute of Technology, Shenzhen
Min Zhang
Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen