Benchmarking and Improving LLM Robustness for Personalized Generation

📅 2025-09-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper examines the tension between factual accuracy and user preference alignment in personalized generation by large language models (LLMs). The authors define a model as robust when its personalized responses remain factually accurate while aligning with user preferences, and they introduce PERG, a unified evaluation framework, together with its benchmark dataset PERGData. Evaluating fourteen models from five model families under several prompting strategies, they find that even the strongest models (GPT-4.1, LLaMA3-70B) lose correctness in 5% of previously successful cases once personalization is applied, while smaller 7B-scale models fail more than 20% of the time, and that robustness varies with query type and preference category. To mitigate these failures, they propose Pref-Aligner, a two-stage approach that improves robustness by an average of 25% across models, supporting more trustworthy, preference-aware generation.

📝 Abstract
Recent years have witnessed a growing interest in personalizing the responses of large language models (LLMs). While existing evaluations primarily focus on whether a response aligns with a user's preferences, we argue that factuality is an equally important yet often overlooked dimension. In the context of personalization, we define a model as robust if its responses are both factually accurate and align with the user preferences. To assess this, we introduce PERG, a scalable framework for evaluating robustness in LLMs, along with a new dataset, PERGData. We evaluate fourteen models from five different model families using different prompting methods. Our findings show that current LLMs struggle with robust personalization: even the strongest models (GPT-4.1, LLaMA3-70B) fail to maintain correctness in 5% of previously successful cases without personalization, while smaller models (e.g., 7B-scale) can fail more than 20% of the time. Further analysis reveals that robustness is significantly affected by the nature of the query and the type of user preference. To mitigate these failures, we propose Pref-Aligner, a two-stage approach that improves robustness by an average of 25% across models. Our work highlights critical gaps in current evaluation practices and introduces tools and metrics to support more reliable, user-aligned LLM deployments.
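The abstract's robustness definition suggests a simple metric: among queries a model already answers correctly without personalization, count the fraction that stay factually correct and preference-aligned once a preference is injected. Below is a minimal sketch of that computation; the `EvalCase` fields and the two judge callables are hypothetical stand-ins, since PERG's actual scoring interface is not shown on this page.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    query: str
    base_response: str          # response without any user preference
    personalized_response: str  # response after injecting the preference
    preference: str

def robustness_score(
    cases: list[EvalCase],
    is_factual: Callable[[str, str], bool],   # (query, response) -> bool
    is_aligned: Callable[[str, str], bool],   # (preference, response) -> bool
) -> float:
    """Fraction of previously correct cases that stay factual AND
    preference-aligned after personalization (hypothetical metric shape)."""
    # Only cases the model already got right without personalization count,
    # mirroring the abstract's "previously successful cases" framing.
    eligible = [c for c in cases if is_factual(c.query, c.base_response)]
    if not eligible:
        return 0.0
    robust = [
        c for c in eligible
        if is_factual(c.query, c.personalized_response)
        and is_aligned(c.preference, c.personalized_response)
    ]
    return len(robust) / len(eligible)
```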
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM robustness in personalized generation, where responses must stay factually accurate while following user preferences
Reducing the substantial rate at which models lose correctness once responses are personalized
Understanding how query type and preference category affect robustness (see the example record after this list)
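To make the query-type and preference-category dimensions concrete, here is a hypothetical PERGData-style record; the field names are illustrative, as the dataset's actual schema is not shown on this page.

```python
# A hypothetical PERGData-style evaluation record; all field names are
# assumptions for illustration, not the dataset's real schema.
sample_case = {
    "query": "What year did the Apollo 11 mission land on the Moon?",
    "query_type": "factual-lookup",          # robustness varies by query nature
    "preference": "I prefer very short, one-sentence answers.",
    "preference_category": "style/brevity",  # and by preference type
    "gold_answer": "1969",
}
```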
Innovation

Methods, ideas, or system contributions that make the work stand out.

PERG, a scalable framework for evaluating LLM robustness, with the accompanying PERGData benchmark
Pref-Aligner, a two-stage approach that improves robustness by an average of 25% across models (see the sketch after this list)
Joint factuality and preference alignment as the robustness criterion
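The page describes Pref-Aligner only as a two-stage approach, so the following is one plausible reading under stated assumptions: a first stage answers the query with no preference in the prompt (protecting factual content), and a second stage rewrites that answer to honor the preference. The `llm` callable and both prompt templates are hypothetical, not the paper's actual design.

```python
from typing import Callable

def pref_aligner(
    llm: Callable[[str], str],  # hypothetical text-in/text-out model call
    query: str,
    preference: str,
) -> str:
    # Stage 1: answer without the preference, so factual content is
    # produced free of any preference-induced pressure.
    draft = llm(f"Answer the question accurately.\n\nQuestion: {query}")

    # Stage 2: adapt the draft to the user's preference while keeping
    # the factual content intact (assumed division of labor between stages).
    aligned = llm(
        "Rewrite the answer to match the user's preference without "
        "changing any facts.\n\n"
        f"Preference: {preference}\n\nAnswer: {draft}"
    )
    return aligned
```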
Authors

Chimaobi Okite, University of Michigan
Naihao Deng, The University of Michigan, Ann Arbor (Natural Language Processing)
Kiran Bodipati, University of Michigan
Huaidian Hou, University of Michigan
Joyce Chai, University of Michigan
Rada Mihalcea, Professor of Computer Science, University of Michigan (Natural Language Processing, Computational Social Science, Multimodal Interaction)