An Uncertainty-Driven Adaptive Self-Alignment Framework for Large Language Models

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of aligning large language models (LLMs) with human intent and safety norms under fully unsupervised conditions. Methodologically, it introduces the first uncertainty-driven adaptive self-alignment framework, which quantifies uncertainty across three orthogonal dimensions (semantic coherence, factual consistency, and value alignment) via multi-response generation and dynamic preference sample construction, coupled with a staged reinforcement learning optimization strategy that enables progressive, annotation-free self-alignment. Empirically, the framework achieves significant improvements over state-of-the-art methods across four core tasks: harmlessness, helpfulness, truthfulness, and affect-controllable generation. Comprehensive evaluation demonstrates its effectiveness, robustness, and strong cross-task generalization. By eliminating reliance on human-labeled preference data, this work establishes a scalable, autonomous alignment paradigm for LLMs.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable progress in instruction following and general-purpose reasoning. However, achieving high-quality alignment with human intent and safety norms without human annotations remains a fundamental challenge. In this work, we propose an Uncertainty-Driven Adaptive Self-Alignment (UDASA) framework designed to improve LLM alignment in a fully automated manner. UDASA first generates multiple responses for each input and quantifies output uncertainty across three dimensions: semantics, factuality, and value alignment. Based on these uncertainty scores, the framework constructs preference pairs and categorizes training samples into three stages (conservative, moderate, and exploratory) according to the uncertainty difference within each pair. The model is then optimized progressively across these stages. In addition, we conduct a series of preliminary studies to validate the core design assumptions and provide strong empirical motivation for the proposed framework. Experimental results show that UDASA outperforms existing alignment methods across multiple tasks, including harmlessness, helpfulness, truthfulness, and controlled sentiment generation, significantly improving model performance.
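The pipeline the abstract describes (sample several responses per input, score each response's uncertainty along three dimensions, then pair the least- against the most-uncertain response) can be sketched roughly as follows. The per-dimension scorers below are hypothetical placeholders for illustration only; the paper's actual uncertainty estimators are not reproduced here.

```python
# Hypothetical sketch of UDASA-style preference-pair construction.
# All three scorers are stand-in heuristics, not the paper's estimators.

def semantic_uncertainty(response: str) -> float:
    # Placeholder: real versions might measure dispersion across
    # paraphrases or sampled continuations.
    words = response.split()
    return len(set(words)) / max(len(words), 1)

def factual_uncertainty(response: str) -> float:
    # Placeholder: real versions might check agreement with
    # retrieved evidence or self-consistency across samples.
    return 0.0 if "verified" in response else 0.5

def value_uncertainty(response: str) -> float:
    # Placeholder: real versions might use a safety classifier's
    # predictive entropy.
    return 0.8 if "unsafe" in response else 0.1

def total_uncertainty(response: str, weights=(1.0, 1.0, 1.0)) -> float:
    """Weighted sum over the three uncertainty dimensions."""
    scores = (semantic_uncertainty(response),
              factual_uncertainty(response),
              value_uncertainty(response))
    return sum(w * s for w, s in zip(weights, scores))

def build_preference_pair(responses):
    """Rank sampled responses by total uncertainty and return
    (chosen, rejected, gap): least-uncertain as chosen,
    most-uncertain as rejected, plus their uncertainty difference."""
    ranked = sorted(responses, key=total_uncertainty)
    chosen, rejected = ranked[0], ranked[-1]
    gap = total_uncertainty(rejected) - total_uncertainty(chosen)
    return chosen, rejected, gap
```

The resulting `gap` is exactly the "uncertainty difference" the abstract uses to assign each pair to a training stage.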

Problem

Research questions and friction points this paper is trying to address.

Automating LLM alignment with human intent without annotations
Quantifying uncertainty in semantics, factuality, and value alignment
Improving alignment across harmlessness, helpfulness, and truthfulness

Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty-driven adaptive self-alignment for LLMs
Automated multi-dimensional uncertainty quantification
Progressive optimization across uncertainty stages
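The staged, progressive optimization listed above could be organized as a simple curriculum over the uncertainty gap of each preference pair: large-gap pairs carry the clearest preference signal and are trained on first. The thresholds and stage ordering below are illustrative assumptions, not values from the paper.

```python
# Illustrative curriculum staging by uncertainty gap.
# Thresholds t_high/t_low are made-up values, not the paper's.

CONSERVATIVE, MODERATE, EXPLORATORY = "conservative", "moderate", "exploratory"

def assign_stage(gap: float, t_high: float = 0.6, t_low: float = 0.3) -> str:
    """Map a pair's uncertainty gap to a training stage:
    large gap -> clear signal -> conservative (train first);
    small gap -> ambiguous pair -> exploratory (train last)."""
    if gap >= t_high:
        return CONSERVATIVE
    if gap >= t_low:
        return MODERATE
    return EXPLORATORY

def staged_batches(pairs):
    """pairs: iterable of (chosen, rejected, gap) tuples.
    Yields (stage, bucket) in curriculum order, skipping empty stages."""
    order = [CONSERVATIVE, MODERATE, EXPLORATORY]
    buckets = {stage: [] for stage in order}
    for pair in pairs:
        buckets[assign_stage(pair[2])].append(pair)
    for stage in order:
        if buckets[stage]:
            yield stage, buckets[stage]
```

Each stage's bucket would then be fed to a preference-optimization step (e.g. an RL or DPO-style update) before moving to the next, harder stage.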