OI-Bench: An Option Injection Benchmark for Evaluating LLM Susceptibility to Directive Interference

📅 2026-01-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the vulnerability of large language models (LLMs) to misleading instructions within multiple-choice question interfaces. The authors introduce the first standardized and scalable benchmark that integrates interface manipulation with instruction-based perturbations, injecting distractors across 16 distinct instruction categories—such as social conformity and reward/threat framing—into answer options. Leveraging a dataset of 3,000 questions spanning knowledge, reasoning, and commonsense domains, they assess 12 prominent LLMs, revealing widespread and significant susceptibility to such manipulations alongside notable disparities in robustness. The work further examines the efficacy of various reasoning and alignment-based mitigation strategies, providing empirical foundations for enhancing the robustness of instruction-following in LLMs.

Technology Category

Application Category

📝 Abstract
Benchmarking large language models (LLMs) is critical for understanding their capabilities, limitations, and robustness. In addition to interface artifacts, prior studies have shown that LLM decisions can be influenced by directive signals such as social cues, framing, and instructions. In this work, we introduce option injection, a benchmarking approach that augments the multiple-choice question answering (MCQA) interface with an additional option containing a misleading directive, leveraging standardized choice structure and scalable evaluation. We construct OI-Bench, a benchmark of 3,000 questions spanning knowledge, reasoning, and commonsense tasks, with 16 directive types covering social compliance, bonus framing, threat framing, and instructional interference. This setting combines manipulation of the choice interface with directive-based interference, enabling systematic assessment of model susceptibility. We evaluate 12 LLMs to analyze attack success rates, behavioral responses, and further investigate mitigation strategies ranging from inference-time prompting to post-training alignment. Experimental results reveal substantial vulnerabilities and heterogeneous robustness across models. OI-Bench is expected to support more systematic evaluation of LLM robustness to directive interference within choice-based interfaces.
Problem

Research questions and friction points this paper is trying to address.

directive interference
option injection
LLM robustness
multiple-choice question answering
benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Option Injection
Directive Interference
LLM Robustness
Multiple-Choice QA
Benchmarking
Y
Yow-Fu Liou
Department of Computer Science, National Yang Ming Chiao Tung University, Taiwan
Yu-Chien Tang
Yu-Chien Tang
National Yang Ming Chiao Tung University
Deep LearningMachine LearningNatural Language Processing
Y
Yu-Hsiang Liu
Department of Computer Science, National Yang Ming Chiao Tung University, Taiwan
An-Zi Yen
An-Zi Yen
National Yang Ming Chiao Tung University
Natural Language Processing