AI Summary
Dependency parsing aims to model the syntactic dependency relations among words in a sentence. This paper proposes a purely prompt-driven, text-to-text dependency parsing approach that leverages only pretrained sequence-to-sequence models (e.g., T5 or BART), eliminating task-specific decoders and parameters. Our method introduces three key innovations: (1) a structured prompt template that explicitly encodes the topological constraints of dependency trees; (2) a linearized textual serialization of dependency structures; and (3) high-accuracy parsing without any parameter updates, i.e., zero-parameter fine-tuning. Experiments across multilingual benchmarks demonstrate that our approach matches or surpasses state-of-the-art parsers in accuracy, while exhibiting strong cross-model and cross-lingual plug-and-play capability and zero-shot transfer performance. To our knowledge, this is the first work to empirically validate the effectiveness and generalizability of the pure prompting paradigm for syntactic parsing.
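To make the idea of a "linearized textual serialization" concrete, below is a minimal sketch of how a dependency tree might be flattened into a text string that a sequence-to-sequence model can emit. The `word -> head (label)` template and the `linearize` helper are illustrative assumptions, not the paper's actual prompt format:

```python
def linearize(tokens, heads, labels):
    """Serialize a dependency tree as flat text.

    tokens: list of words in the sentence
    heads:  1-based head index per word (0 denotes the root)
    labels: dependency relation per word

    NOTE: this "word -> head (label)" template is a hypothetical
    serialization; the paper's structured prompt may differ.
    """
    parts = []
    for tok, head, lab in zip(tokens, heads, labels):
        head_word = "ROOT" if head == 0 else tokens[head - 1]
        parts.append(f"{tok} -> {head_word} ({lab})")
    return " ; ".join(parts)

# Example: serialize the tree for "She reads books"
print(linearize(["She", "reads", "books"],
                [2, 0, 2],
                ["nsubj", "root", "obj"]))
```

Because every arc becomes a plain-text fragment, predicting the tree reduces to generating (or scoring) this string, which is what allows a text-to-text model to parse without any task-specific prediction layers.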
Abstract
Dependency parsing is a fundamental task in natural language processing (NLP) that aims to identify syntactic dependencies and construct a syntactic tree for a given sentence. Traditional dependency parsing models typically build token embeddings and rely on additional task-specific layers for prediction. We propose a novel dependency parsing method that relies solely on an encoder model with a text-to-text training approach. To facilitate this, we introduce a structured prompt template that effectively captures the structural information of dependency trees. Our experimental results demonstrate that the proposed method achieves performance comparable to, or better than, traditional models, despite relying solely on a pre-trained model. Furthermore, the method adapts readily to various pre-trained models, target languages, and training environments, allowing easy integration of task-specific features.