🤖 AI Summary
This study systematically evaluates the vulnerability of large language models (LLMs) to misleading instructions within multiple-choice question interfaces. The authors introduce the first standardized and scalable benchmark that integrates interface manipulation with instruction-based perturbations, injecting distractors across 16 distinct instruction categories—such as social conformity and reward/threat framing—into answer options. Leveraging a dataset of 3,000 questions spanning knowledge, reasoning, and commonsense domains, they assess 12 prominent LLMs, revealing widespread and significant susceptibility to such manipulations alongside notable disparities in robustness. The work further examines the efficacy of various reasoning and alignment-based mitigation strategies, providing empirical foundations for enhancing the robustness of instruction-following in LLMs.
📝 Abstract
Benchmarking large language models (LLMs) is critical for understanding their capabilities, limitations, and robustness. In addition to interface artifacts, prior studies have shown that LLM decisions can be influenced by directive signals such as social cues, framing, and instructions. In this work, we introduce option injection, a benchmarking approach that augments the multiple-choice question answering (MCQA) interface with an additional option containing a misleading directive, leveraging standardized choice structure and scalable evaluation. We construct OI-Bench, a benchmark of 3,000 questions spanning knowledge, reasoning, and commonsense tasks, with 16 directive types covering social compliance, bonus framing, threat framing, and instructional interference. This setting combines manipulation of the choice interface with directive-based interference, enabling systematic assessment of model susceptibility. We evaluate 12 LLMs to analyze attack success rates, behavioral responses, and further investigate mitigation strategies ranging from inference-time prompting to post-training alignment. Experimental results reveal substantial vulnerabilities and heterogeneous robustness across models. OI-Bench is expected to support more systematic evaluation of LLM robustness to directive interference within choice-based interfaces.