Mind the Gap: A Closer Look at Tokenization for Multiple-Choice Question Answering with LLMs

📅 2025-09-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In LLM evaluation on multiple-choice question answering (MCQA), the tokenization of the space following the "Answer:" prompt, a commonly overlooked choice, induces accuracy fluctuations of up to 11% and substantially alters models' relative rankings, undermining the reliability of existing comparisons. Method: the authors systematically analyze how different tokenization strategies affect answer extraction via next-token probabilities, focusing on how the space preceding the answer letter (e.g., "A") is segmented. Contribution/Results: merging the leading space with the answer letter (encoding " A" as a single token) improves confidence calibration and prediction consistency, yielding stable and statistically significant accuracy gains across multiple mainstream LLMs and datasets. The study advocates making tokenization conventions an explicit, mandatory component of MCQA evaluation protocols to improve transparency, reproducibility, and fairness in LLM benchmarking.

📝 Abstract
When evaluating large language models (LLMs) with multiple-choice question answering (MCQA), it is common to end the prompt with the string "Answer:" to facilitate automated answer extraction via next-token probabilities. However, there is no consensus on how to tokenize the space following the colon, often overlooked as a trivial choice. In this paper, we uncover accuracy differences of up to 11% due to this (seemingly irrelevant) tokenization variation as well as reshuffled model rankings, raising concerns about the reliability of LLM comparisons in prior work. Surprisingly, we are able to recommend one specific strategy -- tokenizing the space together with the answer letter -- as we observe consistent and statistically significant performance improvements. Additionally, it improves model calibration, enhancing the reliability of the model's confidence estimates. Our findings underscore the importance of careful evaluation design and highlight the need for standardized, transparent evaluation protocols to ensure reliable and comparable results.
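The ambiguity the abstract describes is easy to reproduce in miniature. The sketch below uses a toy greedy longest-match tokenizer and two invented vocabularies (the vocabularies and the `tokenize` helper are hypothetical, not the paper's code): one vocabulary contains a merged " A" token, the other does not, so the same prompt suffix "Answer: A" segments differently.

```python
def tokenize(text, vocab):
    """Greedy longest-match tokenization over a toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible match first, shrinking toward one character.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"untokenizable text at position {i}")
    return tokens

# Two hypothetical vocabularies: one merges the leading space with the
# answer letter (" A" is a single token), the other keeps them separate.
vocab_merged = {"Answer", ":", " A", " ", "A"}
vocab_split = {"Answer", ":", " ", "A"}

print(tokenize("Answer: A", vocab_merged))  # ['Answer', ':', ' A']
print(tokenize("Answer: A", vocab_split))   # ['Answer', ':', ' ', 'A']
```

Because the two segmentations place the answer letter in different tokens, next-token probabilities read off at the "Answer:" position refer to different events, which is exactly why the extraction strategy matters.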
Problem

Research questions and friction points this paper is trying to address.

Tokenization variations shift MCQA accuracy by up to 11%
Space tokenization impacts model ranking reliability
Standardized evaluation protocols needed for LLM comparisons
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tokenizing space with answer letter improves accuracy
Strategy enhances model calibration and confidence reliability
Standardized transparent evaluation protocols ensure comparable results
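The two extraction strategies contrasted above can be sketched with invented next-token distributions (all numbers and the `pick` helper are hypothetical, chosen only to show that the strategies can disagree): under the merged strategy the prompt ends at the colon and space-plus-letter tokens are scored; under the split strategy the trailing space is appended to the prompt and bare letters are scored.

```python
# Hypothetical next-token distribution after a prompt ending "...Answer:".
# Mass falls on merged space+letter tokens and on the bare space token.
dist_after_colon = {" A": 0.40, " B": 0.25, " C": 0.10, " D": 0.05, " ": 0.20}

# Hypothetical distribution after "...Answer: " (space already in the prompt),
# where the model scores bare answer letters as the next token.
dist_after_space = {"A": 0.30, "B": 0.35, "C": 0.20, "D": 0.15}

def pick(dist, candidates):
    """Answer extraction: argmax over the candidate answer tokens."""
    return max(candidates, key=lambda t: dist.get(t, 0.0))

# Merged strategy: score " A".." D" directly at the colon.
merged = pick(dist_after_colon, [" A", " B", " C", " D"]).strip()
# Split strategy: append the space ourselves, then score "A".."D".
split = pick(dist_after_space, ["A", "B", "C", "D"])

print(merged, split)  # the two strategies disagree here: A vs B
```

With these made-up numbers the merged strategy selects "A" while the split strategy selects "B", illustrating how a seemingly trivial whitespace convention can flip the extracted answer and, aggregated over a benchmark, reshuffle model rankings.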