A Unifying Scheme for Extractive Content Selection Tasks

📅 2025-07-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Extractive content selection tasks, such as summarization, question answering, and keyword extraction, have historically been studied in isolation within NLP, without a unified modeling and evaluation framework. Method: The paper proposes Instruction-Guided Content Selection (IGCS), a unified multi-task framework in which the task definition and any instance-specific requirements are encoded as instructions to a large language model; introduces IGCSBench, the first unified benchmark covering seven diverse extractive selection tasks; and creates a transferable synthetic dataset that improves zero-shot and few-shot generalization. Results: Experiments show that IGCS often outperforms baselines both with and without task-specific training data, demonstrating robustness and generality in cross-task transfer and unified evaluation.

📝 Abstract
A broad range of NLP tasks involve selecting relevant text spans from given source texts. Despite this shared objective, such *content selection* tasks have traditionally been studied in isolation, each with its own modeling approaches, datasets, and evaluation metrics. In this work, we propose *instruction-guided content selection (IGCS)* as a beneficial unified framework for such settings, where the task definition and any instance-specific request are encapsulated as instructions to a language model. To promote this framework, we introduce IGCSBench, the first unified benchmark covering diverse content selection tasks. Further, we create a large generic synthetic dataset that can be leveraged for diverse content selection tasks, and show that transfer learning with these datasets often boosts performance, whether dedicated training for the targeted task is available or not. Finally, we address generic inference time issues that arise in LLM-based modeling of content selection, assess a generic evaluation metric, and overall propose the utility of our resources and methods for future content selection models. Models and datasets available at https://github.com/shmuelamar/igcs.
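The abstract's core framing, wrapping a task definition and an instance-specific request into a single instruction, and scoring the selected spans with a generic metric, can be sketched as follows. This is an illustrative assumption of how such a setup might look, not the paper's actual prompt template or evaluation code; the function names and token-level F1 scoring are hypothetical stand-ins.

```python
def build_instruction(task_definition: str, request: str, source: str) -> str:
    """Encapsulate the task definition and instance-specific request
    as a single instruction to a language model (hypothetical template)."""
    return (
        f"Task: {task_definition}\n"
        f"Request: {request}\n"
        f"Select the relevant spans, verbatim, from the source below.\n"
        f"Source:\n{source}"
    )


def token_f1(predicted: list[str], gold: list[str]) -> float:
    """A generic token-level F1 between predicted and gold selected spans,
    one plausible instance of a task-agnostic content selection metric."""
    pred_tokens = [t for span in predicted for t in span.split()]
    gold_tokens = [t for span in gold for t in span.split()]
    if not pred_tokens or not gold_tokens:
        return 0.0
    # Count overlapping tokens with multiplicity.
    overlap = 0
    gold_remaining = list(gold_tokens)
    for t in pred_tokens:
        if t in gold_remaining:
            overlap += 1
            gold_remaining.remove(t)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Because the same instruction wrapper and metric apply to any extractive task (summarization, QA highlighting, keyword extraction), they illustrate why a single model and benchmark can cover all of them.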
Problem

Research questions and friction points this paper is trying to address.

Unifying diverse extractive content selection NLP tasks
Creating a benchmark and synthetic dataset for content selection
Addressing inference issues in LLM-based content selection modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Instruction-guided content selection framework
Unified benchmark for diverse tasks
Large synthetic dataset for transfer learning