Divide-Verify-Refine: Can LLMs Self-Align with Complex Instructions?

📅 2024-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address inconsistent responses and low constraint adherence of large language models (LLMs) under multi-constraint complex instructions (e.g., those specifying length, format, and sentiment), this paper proposes the three-stage Divide-Verify-Refine (DVR) framework. First, composite instructions are divided into atomic, single-constraint subtasks. Second, dedicated tools perform rigorous, automated verification against each constraint. Third, dynamic retrieval-augmented few-shot prompting drives targeted refinement. Key contributions include: (1) a scalable paradigm for constraint decomposition and dynamic refinement; (2) a new benchmark dataset designed for complex instruction following; and (3) a dual-track verification mechanism that pairs a pre-trained classifier for content analysis with a Python-based toolkit for format checks. On the authors' benchmark, DVR doubles Llama3.1-8B's constraint adherence and triples Mistral-7B's (relative gains of 100% and 200%, respectively).

📝 Abstract
Recent studies show LLMs struggle with complex instructions involving multiple constraints (e.g., length, format, sentiment). Existing works address this issue by fine-tuning, which heavily relies on fine-tuning data quality and is computationally expensive. An alternative is leveraging LLMs' self-correction to refine responses for better constraint adherence. However, this is limited by the feedback quality, as LLMs cannot generate reliable feedback or detect errors. Moreover, its effectiveness relies on few-shot examples illustrating response modifications. As constraints in complex instructions are diverse, manually crafting such examples for each constraint type can be labor-intensive and sub-optimal. To address these two challenges, we propose the Divide-Verify-Refine (DVR) framework with three steps: (1) Divide complex instructions into single constraints and prepare appropriate tools; (2) Verify responses using tools that provide rigorous checks and textual guidance (e.g., a Python toolkit for format checks or pre-trained classifiers for content analysis); (3) Refine: to maximize refinement effectiveness, we propose dynamic few-shot prompting, where a refinement repository collects successful refinements, and these examples are selectively retrieved for future refinements. Recognizing the lack of complexity in existing datasets, we create a new dataset of complex instructions. DVR doubles Llama3.1-8B's constraint adherence and triples Mistral-7B's performance.
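The abstract's Verify step can be illustrated with simple rule-based constraint checkers that return both a pass/fail verdict and textual guidance for the refiner. This is only a minimal sketch in the spirit of the paper's Python toolkit; the function names, checker set, and feedback format here are assumptions, not the paper's actual API.

```python
# Sketch of the "Verify" stage: each checker rigorously tests one formal
# constraint and, on failure, emits textual guidance the LLM can act on.
import json

def check_word_count(response: str, max_words: int):
    """Length constraint: at most max_words words."""
    n = len(response.split())
    if n <= max_words:
        return True, ""
    return False, f"Response has {n} words; the limit is {max_words}. Shorten it."

def check_json_format(response: str):
    """Format constraint: response must be valid JSON."""
    try:
        json.loads(response)
        return True, ""
    except json.JSONDecodeError as e:
        return False, f"Response is not valid JSON: {e.msg}. Fix the syntax."

def verify(response: str, checkers):
    """Run every single-constraint checker; collect guidance for refinement."""
    feedback = [msg for check in checkers
                for ok, msg in [check(response)] if not ok]
    return len(feedback) == 0, feedback
```

Content constraints such as sentiment would instead be routed to a pre-trained classifier, per the abstract; only formal constraints lend themselves to exact checks like these.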
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle with complex multi-constraint instructions
Existing methods rely on costly fine-tuning or unreliable self-correction
Manual crafting of few-shot examples for diverse constraints is inefficient
Innovation

Methods, ideas, or system contributions that make the work stand out.

Divide complex instructions into single constraints
Verify responses using specialized tools for checks
Refine with dynamic few-shot prompting from repository
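The third bullet's dynamic few-shot prompting can be sketched as a repository that stores successful refinements and retrieves the most relevant ones for the next failure. The Jaccard-overlap retrieval score and the (feedback, before, after) layout are illustrative assumptions; the paper does not prescribe this exact scheme.

```python
# Sketch of the "Refine" stage's repository: collect refinements that later
# passed verification, then retrieve the closest matches as few-shot examples.
def similarity(a: str, b: str) -> float:
    """Jaccard overlap between token sets, as a cheap retrieval score."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

class RefinementRepository:
    def __init__(self):
        self.examples = []  # (feedback, before, after) triples

    def add(self, feedback: str, before: str, after: str):
        """Store a refinement whose 'after' response passed verification."""
        self.examples.append((feedback, before, after))

    def retrieve(self, feedback: str, k: int = 2):
        """Return the k stored refinements most similar to this feedback."""
        ranked = sorted(self.examples,
                        key=lambda ex: similarity(ex[0], feedback),
                        reverse=True)
        return ranked[:k]

    def build_prompt(self, feedback: str, response: str) -> str:
        """Assemble a few-shot refinement prompt from retrieved examples."""
        shots = "\n".join(f"Feedback: {f}\nBefore: {b}\nAfter: {a}"
                          for f, b, a in self.retrieve(feedback))
        return f"{shots}\nFeedback: {feedback}\nBefore: {response}\nAfter:"
```

In practice the repository grows as the framework runs, which is what makes the few-shot examples "dynamic": no per-constraint examples need to be hand-crafted up front.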