EcoAlign: An Economically Rational Framework for Efficient LVLM Alignment

📅 2025-11-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large vision-language models (LVLMs) exhibit jailbreaking vulnerabilities, and existing alignment methods struggle to jointly optimize safety, utility, and computational efficiency—particularly “process-blind” strategies, which are easily circumvented by malicious reasoning. To address this, we propose a **process-aware, economically rational alignment framework**, modeling safety alignment as inference-time bounded-rational search. Our method introduces forward value estimation–guided progressive thought-graph expansion, weakest-link path pruning, dynamic net present value scoring, and path-level safety constraints. Extensive experiments across three closed-source and two open-source LVLMs, evaluated on six benchmark datasets, demonstrate that our approach maintains or improves both safety and task performance while significantly reducing inference computational overhead. Notably, it is the first method to explicitly optimize the triadic trade-off among safety, utility, and cost.

📝 Abstract
Large Vision-Language Models (LVLMs) exhibit powerful reasoning capabilities but suffer from sophisticated jailbreak vulnerabilities. Fundamentally, aligning LVLMs is not just a safety challenge but a problem of economic efficiency. Current alignment methods struggle with the trade-off between safety, utility, and operational costs. Critically, a focus solely on final outputs (process-blindness) wastes significant computational budget on unsafe deliberation. This flaw allows harmful reasoning to be disguised with benign justifications, thereby circumventing simple additive safety scores. To address this, we propose EcoAlign, an inference-time framework that reframes alignment as an economically rational search by treating the LVLM as a boundedly rational agent. EcoAlign incrementally expands a thought graph and scores actions using a forward-looking function (analogous to net present value) that dynamically weighs expected safety, utility, and cost against the remaining budget. To prevent deception, path safety is enforced via the weakest-link principle. Extensive experiments across 3 closed-source and 2 open-source models on 6 datasets show that EcoAlign matches or surpasses state-of-the-art safety and utility at a lower computational cost, thereby offering a principled, economical pathway to robust LVLM alignment.
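The forward-looking, net-present-value-style action score described above can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the function name, the discount factor, and the particular way safety, utility, and cost are combined against the remaining budget are all assumptions for the sake of the example.

```python
# Hypothetical sketch of an NPV-style score for expanding one node of the
# thought graph. Higher is better; a step the budget cannot cover is ruled out.
def npv_score(expected_safety: float,
              expected_utility: float,
              expected_cost: float,
              remaining_budget: float,
              discount: float = 0.9) -> float:
    """Score a candidate expansion by its discounted safe utility,
    penalized by the share of the remaining compute budget it consumes."""
    if expected_cost > remaining_budget:
        return float("-inf")  # unaffordable step: never selected
    benefit = discount * expected_safety * expected_utility
    cost_penalty = expected_cost / max(remaining_budget, 1e-9)
    return benefit - cost_penalty
```

Under this sketch, two steps with equal expected safety and utility are ranked by cost: the cheaper one scores higher, and the penalty grows as the budget shrinks, which matches the "dynamically weighs ... against the remaining budget" behavior the abstract describes.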
Problem

Research questions and friction points this paper is trying to address.

Addressing trade-offs between safety, utility, and operational costs in LVLM alignment
Preventing computational waste on unsafe deliberation through process-aware methods
Mitigating jailbreak vulnerabilities in which harmful reasoning is disguised as benign justification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Economically rational search framework for LVLM alignment
Forward-looking (net-present-value-style) scoring function weighing safety, utility, and cost against the remaining budget
Weakest-link principle enforcing path safety against deception
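The weakest-link principle in the last bullet can be illustrated with a short sketch. The step-level safety scores, the threshold, and the function names are hypothetical; the point is that a single unsafe step sinks the whole path, whereas additive or averaged scoring would let a harmful step hide among benign ones.

```python
# Illustrative weakest-link pruning over candidate reasoning paths.
# Each path is a list of per-step safety scores in [0, 1] (assumed scale).
def path_safety(step_scores: list[float]) -> float:
    """Weakest-link rule: a path is only as safe as its least safe step."""
    return min(step_scores)

def prune_unsafe(paths: dict[str, list[float]],
                 threshold: float = 0.5) -> dict[str, list[float]]:
    """Keep only paths whose weakest step clears the safety threshold."""
    return {name: scores for name, scores in paths.items()
            if path_safety(scores) >= threshold}

paths = {
    "benign":    [0.9, 0.8, 0.9],       # consistently safe reasoning
    "disguised": [0.9, 0.9, 0.1, 0.9],  # one harmful step among benign ones
}
# An average would rate "disguised" around 0.7 and admit it;
# the weakest-link rule scores it 0.1 and prunes it.
survivors = prune_unsafe(paths)
```

This is exactly the deception case the paper targets: benign justifications cannot compensate for a harmful intermediate step, because the path score is a minimum rather than a sum.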
Ruoxi Cheng
Alibaba Group
Haoxuan Ma
University of California, Los Angeles
Intelligent Transportation Systems · Machine Learning · Automated Vehicle
Teng Ma
Sun Yat-Sen University
Hongyi Zhang
Nanyang Technological University