🤖 AI Summary
Pretrained language models exhibit limited cross-domain transferability and insufficient contextual adaptability in sequence labeling tasks. To address these challenges, this work proposes three key innovations: (1) a multi-task learning framework that incorporates external knowledge signals by jointly modeling auxiliary tasks such as event trigger detection; (2) a modified autoregressive large language model architecture enabling bidirectional inter-layer information flow to enhance local sequence awareness; and (3) a generative in-context learning paradigm for sequence labeling, supporting few-shot adaptation without parameter updates. Evaluated on cross-domain event detection, the approach achieves significant performance gains over strong baselines. Experimental results demonstrate that this targeted transfer learning paradigm effectively unlocks the potential of pretrained models for structured prediction tasks, improving both generalization across domains and contextual sensitivity in label assignment.
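The first contribution, jointly modeling an auxiliary task alongside the main sequence labeling objective, is typically realized as a weighted sum of per-task losses. The sketch below is a minimal, framework-free illustration of that idea; the `aux_weight` hyperparameter and the toy cross-entropy are hypothetical and not taken from the thesis.

```python
import math

def cross_entropy(logits, target):
    # Numerically stable log-softmax cross-entropy for a single token.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

def joint_loss(main_logits, main_targets, aux_logits, aux_targets, aux_weight=0.5):
    """Combine the main sequence labeling loss with an auxiliary-task loss
    (e.g. an extra signal such as event trigger detection).
    aux_weight is an illustrative hyperparameter, not the thesis's value."""
    main = sum(cross_entropy(l, t) for l, t in zip(main_logits, main_targets)) / len(main_targets)
    aux = sum(cross_entropy(l, t) for l, t in zip(aux_logits, aux_targets)) / len(aux_targets)
    return main + aux_weight * aux
```

In a real multi-task model both heads would share an encoder; here only the loss combination is shown.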
📝 Abstract
This doctoral thesis improves transfer learning for sequence labeling tasks by adapting pre-trained neural language models. Three improvements are proposed: a multi-task model that incorporates an additional signal, a method based on architectural modifications to autoregressive large language models, and a sequence labeling framework for autoregressive large language models that utilizes supervised in-context fine-tuning combined with response-oriented adaptation strategies. The first improvement addresses domain transfer for the event trigger detection task: incorporating an additional signal, obtained from a domain-independent text processing system, into a multi-task model improves cross-domain performance. The second improvement modifies the model's architecture through a proposed method that enables bidirectional information flow across the layers of autoregressive large language models. The third improvement uses autoregressive large language models as text generators within a generative supervised in-context fine-tuning framework. Together, the proposed model, method, and framework demonstrate that pre-trained neural language models achieve their best performance on sequence labeling tasks when adapted through targeted transfer learning paradigms.
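The third improvement casts sequence labeling as text generation: labeled demonstrations are placed in the prompt and the model completes the labels for a new input. The sketch below shows one plausible prompt layout for this generative in-context setup; the tag names and formatting conventions are illustrative assumptions, not the thesis's actual templates.

```python
def build_prompt(demonstrations, query_tokens):
    """Format sequence labeling as a generation task.

    demonstrations: list of (tokens, labels) pairs shown in-context.
    query_tokens: the unlabeled sentence the model should complete.
    The token/label pairing scheme is a hypothetical example layout."""
    parts = []
    for tokens, labels in demonstrations:
        spans = ["{}/{}".format(tok, lab) for tok, lab in zip(tokens, labels)]
        parts.append("Sentence: {}\nLabels: {}".format(" ".join(tokens), " ".join(spans)))
    # Leave the final "Labels:" line open for the model to generate.
    parts.append("Sentence: {}\nLabels:".format(" ".join(query_tokens)))
    return "\n\n".join(parts)
```

Supervised in-context fine-tuning would then train the model on such prompts so that few-shot adaptation at inference time requires no parameter updates.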