🤖 AI Summary
This work addresses the limitations of decoder-only models in sequence labeling tasks, which stem from their autoregressive training and the resulting lack of bidirectional context. The authors propose a lightweight sequence repetition strategy that implicitly provides the model with bidirectional contextual information without altering the causal masking mechanism or the model architecture. Key findings show that intermediate-layer embeddings under this strategy match or even surpass final-layer representations while being cheaper to compute, and that, contrary to earlier claims, increasing the number of repetitions does not degrade performance. Experimental results demonstrate that the proposed method outperforms conventional encoder-based models and unmasked decoders across multiple sequence labeling benchmarks, substantially improving the token representation quality and task adaptability of decoder-only architectures.
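To make the intermediate-layer finding concrete, here is a minimal sketch of reading second-copy token embeddings from a middle hidden layer of a repeated input, using a Hugging Face-style causal model; the backbone (`gpt2`), the use of two repetitions, and the choice of the middle layer are illustrative assumptions rather than the paper's actual configuration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")    # assumed backbone, not the paper's
model = AutoModel.from_pretrained("gpt2").eval()

ids = tokenizer("Alice visited Berlin", return_tensors="pt")["input_ids"]
L = ids.size(1)
repeated = ids.repeat(1, 2)                          # sequence repetition: two copies of the input

with torch.no_grad():
    out = model(repeated, output_hidden_states=True)

mid = len(out.hidden_states) // 2                    # hidden_states[0] is the embedding layer
mid_embeddings = out.hidden_states[mid][:, -L:, :]   # second-copy tokens, intermediate layer
final_embeddings = out.hidden_states[-1][:, -L:, :]  # second-copy tokens, final layer
print(mid_embeddings.shape, final_embeddings.shape)  # both (1, L, hidden_size)
```

In practice, the compute savings come from stopping the forward pass at the chosen layer; the full pass here is only to place the intermediate- and final-layer readouts side by side.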
📝 Abstract
Modern language models (LMs) are trained in an autoregressive manner, conditioned only on the prefix. In contrast, sequence labeling (SL) tasks assign a label to each individual input token and thus naturally benefit from bidirectional context. This discrepancy has historically led SL to rely on inherently bidirectional encoder-only models. However, the rapid development of decoder-only models raises the question of whether they can be adapted to SL. While causal mask removal has emerged as a viable technique for letting decoder-only models leverage the full context for SL, it requires considerable changes to the base model's functionality. In this work, we explore sequence repetition (SR) as a less invasive alternative for enabling bidirectionality in decoder-only models. Through fine-tuning experiments, we show that SR effectively makes decoders bidirectional, improving the quality of token-level embeddings and surpassing both encoders and unmasked decoders. Contrary to earlier claims, we find that increasing the number of repetitions does not degrade SL performance. Finally, we demonstrate that embeddings from intermediate layers are highly effective under SR, matching those from final layers while being significantly cheaper to compute. Our findings underscore that SR alleviates the structural limitations of decoders, enabling more efficient and adaptable LMs and broadening their applicability to other token-level tasks.
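As a rough illustration of the setup described above, the sketch below repeats the input sequence before feeding it to an unmodified causal decoder and classifies only the tokens of the last copy; the backbone, the number of repetitions, the label count, and the linear head are hypothetical placeholders, not the authors' exact fine-tuning recipe.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")           # placeholder decoder-only backbone
model = AutoModel.from_pretrained("gpt2")
num_labels, k = 9, 2                                        # e.g. BIO tags for NER; k repetitions
classifier = torch.nn.Linear(model.config.hidden_size, num_labels)

ids = tokenizer("Alice visited Berlin", return_tensors="pt")["input_ids"]   # (1, L)
L = ids.size(1)

# Repeat the sequence k times. The causal mask stays untouched, but every token
# in the last copy now has a full copy of the sentence in its prefix, which is
# what supplies the implicit bidirectional context.
repeated = ids.repeat(1, k)                                 # (1, k*L)

hidden = model(repeated).last_hidden_state                  # (1, k*L, hidden_size)
last_copy = hidden[:, -L:, :]                               # embeddings of the final repetition
logits = classifier(last_copy)                              # (1, L, num_labels) token-level scores
```

Fine-tuning would then apply a standard token-level cross-entropy loss between these logits and the gold label sequence.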