🤖 AI Summary
This study systematically examines the nontrivial impact of padding tokens on batched inference in large language models (LLMs). Challenging the widespread assumption that padding is benign, the authors conduct a controlled empirical analysis across four dimensions (hidden-layer activations, generation quality, social bias, and safety alignment) using the Llama, Gemma, and Qwen model families. By injecting varying amounts of padding into input sequences, they show that even minimal padding significantly perturbs internal representations, degrades coherence and factual consistency (especially in smaller models), and induces unpredictable fluctuations in bias metrics. Critically, standard safety mechanisms, including refusal behavior and content filtering, are substantially weakened under padding. The authors present this work as the first to elevate padding from an implementation detail to a critical factor in model robustness and trustworthy deployment, providing both a rigorous diagnostic benchmark and practical guidance for efficient, secure LLM engineering.
📝 Abstract
Padding tokens are widely used in large language models (LLMs) to equalize sequence lengths during batched inference. While they should be fully masked, implementation errors can cause them to influence computation, and the extent of this influence is not well understood. We systematically study this effect across three open-source model families (Llama, Gemma, Qwen), inserting controlled amounts of padding and evaluating outcomes along four axes: activations, generation quality, bias, and safety. Even small amounts of padding shift hidden representations, degrade quality in smaller models, alter bias in unpredictable ways, and weaken safety guardrails. These findings demonstrate that padding is not a harmless detail but a robustness risk that must be carefully handled in deployment.
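To make the masking assumption in the abstract concrete, here is a minimal NumPy sketch (not from the paper; a simplified single-head attention toy) showing why padding *should* be inert when the attention mask is applied correctly, and how it perturbs outputs when the mask is omitted. The function names and the zero-embedding choice for pad tokens are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, key_mask=None):
    # q, k, v: (seq_len, d). key_mask: (seq_len,), True = real token.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if key_mask is not None:
        # Masked positions get a large negative score -> ~zero weight.
        scores = np.where(key_mask[None, :], scores, -1e9)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
n, pad, d = 4, 3, 8
x = rng.normal(size=(n, d))
# Illustrative assumption: pad tokens embed to the zero vector.
padded = np.concatenate([x, np.zeros((pad, d))])
mask = np.array([True] * n + [False] * pad)

ref = attention(x, x, x)                               # no padding at all
masked = attention(padded, padded, padded, mask)[:n]   # padding correctly masked
unmasked = attention(padded, padded, padded)[:n]       # masking bug: pads attend

print(np.allclose(ref, masked))    # masking neutralizes the padding
print(np.allclose(ref, unmasked))  # without the mask, outputs for real tokens shift
```

The unmasked case drifts because pad keys still receive nonzero softmax weight, diluting the attention placed on real tokens, which is the kind of leakage the paper measures at scale.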