🤖 AI Summary
Recursive self-improvement often induces alignment drift, compromising objective consistency, safety constraints, and performance stability. To address this, the paper proposes SAHOO, a framework that formalizes alignment drift as a quantifiable metric and monitors and mitigates it through three integrated mechanisms: a learned Goal Drift Index (GDI) that combines semantic, lexical, structural, and distributional signals; a verification module that preserves safety-critical constraints; and a regression-risk assessment that flags improvement cycles which undo prior gains. Experiments across 189 tasks show that SAHOO delivers substantial performance gains (18.3% in code generation and 16.8% in mathematical reasoning) while maintaining safety constraints such as syntactic correctness and non-hallucination, revealing a nuanced trade-off between capability advancement and alignment preservation.
📝 Abstract
Recursive self-improvement is moving from theory to practice: modern systems can critique, revise, and evaluate their own outputs, yet iterative self-modification risks subtle alignment drift. We introduce SAHOO, a practical framework that monitors and controls drift through three safeguards: (i) the Goal Drift Index (GDI), a learned multi-signal detector combining semantic, lexical, structural, and distributional measures; (ii) constraint preservation checks that enforce safety-critical invariants such as syntactic correctness and non-hallucination; and (iii) regression-risk quantification that flags improvement cycles which undo prior gains. Across 189 tasks in code generation, mathematical reasoning, and truthfulness, SAHOO produces substantial quality gains, including an 18.3% improvement in code tasks and 16.8% in reasoning, while fully preserving constraints in two domains and keeping violations low in truthfulness. Thresholds are calibrated on a small validation set of 18 tasks across three cycles. We further map the capability-alignment frontier, showing efficient early improvement cycles but rising alignment costs later, and expose domain-specific tensions such as fluency versus factuality. SAHOO therefore makes alignment preservation during recursive self-improvement measurable, deployable, and systematically validated at scale.
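To make the multi-signal idea behind the GDI concrete, here is a minimal illustrative sketch. The real SAHOO detector is learned and includes a semantic (embedding-based) signal; this toy version combines only lexical, distributional, and structural proxies with fixed weights. All function names and weights are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a multi-signal Goal Drift Index (GDI).
# The actual SAHOO detector is learned; the signals and weights
# below are illustrative placeholders, and the semantic
# (embedding-based) signal is omitted for self-containedness.
from collections import Counter

def lexical_drift(a: str, b: str) -> float:
    """Jaccard distance over token sets (a simple lexical signal)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)

def distributional_drift(a: str, b: str) -> float:
    """Total-variation distance between token frequency distributions."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    na, nb = max(sum(ca.values()), 1), max(sum(cb.values()), 1)
    vocab = set(ca) | set(cb)
    return 0.5 * sum(abs(ca[t] / na - cb[t] / nb) for t in vocab)

def structural_drift(a: str, b: str) -> float:
    """Relative length difference as a crude structural proxy."""
    return abs(len(a) - len(b)) / max(len(a), len(b), 1)

def goal_drift_index(original: str, revised: str,
                     weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted combination of drift signals; in SAHOO this
    combination is learned rather than hand-weighted."""
    signals = (lexical_drift(original, revised),
               distributional_drift(original, revised),
               structural_drift(original, revised))
    return sum(w * s for w, s in zip(weights, signals))

objective = "Sort the list of integers in ascending order."
unchanged = "Sort the list of integers in ascending order."
drifted = "Write a poem about autumn leaves."

print(goal_drift_index(objective, unchanged))  # no drift: 0.0
print(goal_drift_index(objective, drifted))    # high drift, near 1
```

A drift score above a calibrated threshold (in SAHOO, calibrated on a small validation set) would flag the improvement cycle for the constraint-preservation and regression-risk checks.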