Improving Transfer Learning for Sequence Labeling Tasks by Adapting Pre-trained Neural Language Models

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pre-trained language models exhibit limited cross-domain transferability and insufficient contextual adaptability in sequence labeling tasks. To address these challenges, this work proposes three key innovations: (1) a multi-task learning framework that incorporates external knowledge signals by jointly modeling auxiliary tasks such as event trigger detection; (2) a modified autoregressive large language model architecture enabling bidirectional inter-layer information flow to enhance local sequence awareness; and (3) a generative in-context learning paradigm for sequence labeling, supporting few-shot adaptation without parameter updates. Evaluated on cross-domain event detection, the approach achieves significant performance gains over strong baselines. Experimental results demonstrate that this targeted transfer learning paradigm effectively unlocks the potential of pre-trained models for structured prediction tasks, improving both generalization across domains and contextual sensitivity in label assignment.
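The first innovation combines the main sequence-labeling objective with an auxiliary task signal under a multi-task objective. A minimal sketch of that idea, assuming a simple weighted sum of the two task losses (the function and parameter names `joint_loss` and `aux_weight` are illustrative, not from the thesis):

```python
# Hypothetical multi-task objective: the main sequence-labeling loss is
# combined with an auxiliary loss (e.g. from a domain-independent signal
# such as event trigger detection), weighted by a mixing coefficient.

def joint_loss(main_loss: float, aux_loss: float, aux_weight: float = 0.5) -> float:
    """Return the combined training loss: main + aux_weight * auxiliary."""
    return main_loss + aux_weight * aux_loss

# Example: main loss 0.8, auxiliary loss 0.4, mixing weight 0.5
total = joint_loss(0.8, 0.4, aux_weight=0.5)  # 0.8 + 0.5 * 0.4 = 1.0
```

In a real multi-task setup both losses would come from task heads sharing one encoder, so gradients from the auxiliary signal also shape the shared representation.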

Technology Category

Application Category

📝 Abstract
This doctoral thesis improves transfer learning for sequence labeling tasks by adapting pre-trained neural language models. The proposed improvements are threefold: a multi-task model that incorporates an additional signal, a method based on architectural modifications to autoregressive large language models, and a sequence labeling framework for autoregressive large language models built on supervised in-context fine-tuning combined with response-oriented adaptation strategies. The first improvement targets domain transfer for the event trigger detection task: transfer can be improved by feeding an additional signal, obtained from a domain-independent text processing system, into a multi-task model. The second improvement modifies the model's architecture; to that end, a method is proposed that enables bidirectional information flow across the layers of autoregressive large language models. The third improvement uses autoregressive large language models as text generators through a generative supervised in-context fine-tuning framework. Together, the proposed model, method, and framework demonstrate that pre-trained neural language models achieve their best performance on sequence labeling tasks when adapted through targeted transfer learning paradigms.
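The second improvement concerns bidirectional information flow in autoregressive models. One simple way to picture the difference is via the attention mask: autoregressive decoders use a causal (lower-triangular) mask, while bidirectional flow corresponds to a full mask, as in encoder-style models. The sketch below only illustrates this contrast; the actual inter-layer mechanism proposed in the thesis may differ:

```python
# Illustrative contrast between causal and bidirectional attention masks.
# In the mask, entry [i][j] == 1 means position i may attend to position j.

def causal_mask(n: int) -> list[list[int]]:
    """Autoregressive mask: each position sees only itself and the past (j <= i)."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

def bidirectional_mask(n: int) -> list[list[int]]:
    """Full mask: every position attends to every other, as in encoders."""
    return [[1] * n for _ in range(n)]

print(causal_mask(3))         # [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
print(bidirectional_mask(3))  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

For sequence labeling, access to right-hand context is known to matter, which motivates relaxing the strictly causal flow of pre-trained autoregressive models.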
Problem

Research questions and friction points this paper is trying to address.

Adapting pre-trained language models for sequence labeling tasks
Improving domain transfer in event detection with multi-task learning
Enabling bidirectional information flow in autoregressive language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-task model incorporates domain-independent signal for transfer learning
Bidirectional information flow enabled in autoregressive language models
Generative supervised in-context fine-tuning framework for sequence labeling
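The third innovation casts sequence labeling as text generation with in-context demonstrations. A minimal sketch of prompt construction under that paradigm, assuming a simple "Sentence/Labels" demonstration format (the function `build_prompt` and the prompt layout are hypothetical; the thesis's exact format is not specified here):

```python
# Hypothetical prompt builder for generative in-context sequence labeling:
# labeled demonstrations are concatenated before the query sentence, and the
# model is expected to generate the label sequence after "Labels:".

def build_prompt(demos: list[tuple[str, str]], query: str) -> str:
    """demos: (sentence, space-separated label sequence) pairs; query: unlabeled sentence."""
    parts = []
    for sentence, labels in demos:
        parts.append(f"Sentence: {sentence}\nLabels: {labels}")
    parts.append(f"Sentence: {query}\nLabels:")
    return "\n\n".join(parts)

demos = [("Protests erupted in the capital .", "B-Conflict O O O O O")]
print(build_prompt(demos, "Troops entered the city ."))
```

Because the demonstrations are supplied at inference time, this supports few-shot adaptation without parameter updates; the supervised in-context fine-tuning step trains the model to follow such prompts reliably.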