🤖 AI Summary
Large language models (LLMs) exhibit poor robustness to semantically equivalent yet lexically diverse prompts, hindering real-world deployment. To address this, we propose LAP, a dual-loop latent-space adversarial framework that models continuous, learnable perturbations as "implicit continuous paraphrases" and enforces semantic consistency via a Lagrangian constraint. LAP combines latent-space adversarial training, dual-loop parameter co-optimization, and semantics-aware regularization, achieving robustness gains without inference-time re-ranking or manual prompt engineering. Evaluated on the RobustAlpaca benchmark, LAP yields absolute worst-case win-rate improvements of 0.5–4% across multiple LLM architectures, consistently outperforming vanilla supervised fine-tuning. Our work points toward a unified approach to prompt robustness that couples continuous perturbation learning with explicit semantic-consistency constraints.
📝 Abstract
Insensitivity to semantics-preserving variations of prompts (paraphrases) is crucial for the reliable behavior and real-world deployment of large language models. However, language models exhibit significant performance degradation when faced with semantically equivalent but differently phrased prompts, and existing solutions either depend on trial-and-error prompt engineering or require computationally expensive inference-time algorithms. Building on the key insight that worst-case prompts exhibit a drift in embedding space, we present Latent Adversarial Paraphrasing (LAP), a dual-loop adversarial framework: the inner loop trains a learnable perturbation to serve as a "latent continuous paraphrase" while preserving semantics through Lagrangian regularization, and the outer loop optimizes the language model parameters on these perturbations. We conduct extensive experiments demonstrating the effectiveness of LAP across multiple LLM architectures on the RobustAlpaca benchmark, with a 0.5%–4% absolute improvement in worst-case win-rate compared with vanilla supervised fine-tuning.
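The dual-loop structure described above can be illustrated with a deliberately minimal toy sketch: an inner loop that gradient-ascends a perturbation on the input (standing in for the latent prompt embedding) under a Lagrangian-style penalty that keeps the perturbation small, and an outer loop that updates the model parameters against the perturbed inputs. This is not the paper's implementation; all names, the scalar linear model, and the hyperparameters (`lam`, `eta_in`, `eta_out`) are hypothetical choices for illustration only.

```python
# Toy sketch of a LAP-style dual loop on a 1-D linear model y = w * x.
# The perturbation `delta` plays the role of the "latent continuous
# paraphrase"; the quadratic term lam * delta**2 stands in for the
# Lagrangian semantic-consistency constraint.

def task_loss(w, delta, data):
    # Mean squared error on inputs shifted by the learned perturbation.
    return sum((w * (x + delta) - y) ** 2 for x, y in data) / len(data)

def lap_train(data, lam=10.0, inner_steps=5, outer_steps=200,
              eta_in=0.05, eta_out=0.05):
    w, delta = 0.0, 0.0
    for _ in range(outer_steps):
        # Inner loop: ascend the task loss in delta, minus the Lagrangian
        # penalty (lam must dominate the loss curvature for stability).
        for _ in range(inner_steps):
            g = sum(2 * (w * (x + delta) - y) * w
                    for x, y in data) / len(data)
            delta += eta_in * (g - 2 * lam * delta)
        # Outer loop: descend the adversarial loss in the model parameter.
        g_w = sum(2 * (w * (x + delta) - y) * (x + delta)
                  for x, y in data) / len(data)
        w -= eta_out * g_w
    return w, delta

# Clean data from the ground-truth model y = 2x; training should recover
# w near 2 while the semantic penalty keeps delta near 0.
data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]
w, delta = lap_train(data)
```

The key design point mirrored here is that the inner maximization is regularized rather than unconstrained: without the `lam * delta**2` term, the adversary could drift arbitrarily far in embedding space, which is exactly the semantic-drift failure the Lagrangian constraint is meant to prevent.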