FIG: Forward-Inverse Generation for Low-Resource Domain-specific Event Detection

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In low-resource domains such as biomedicine and law, event detection faces challenges including label noise, domain drift, and incomplete event annotations in synthetic data. To address these issues, this paper proposes FIG, a hybrid forward-inverse generation framework: forward generation on unlabeled target data extracts domain-specific cues that anchor and constrain inverse generation; inverse generation synthesizes high-quality labeled sentences from those cues; and a forward generation-based refinement step adds missing annotations to the synthetic data. Evaluated on three event detection datasets from diverse domains, the method achieves average F1-score improvements of 3.3% (zero-shot) and 5.4% (few-shot) over the best baseline. Human evaluation and trigger hit rate analysis confirm gains in domain alignment and synthetic data quality.

📝 Abstract
Event Detection (ED) is the task of identifying typed event mentions of interest from natural language text, which benefits domain-specific reasoning in biomedical, legal, and epidemiological domains. However, procuring supervised data for thousands of events across various domains is a laborious and expensive task. To this end, existing works have explored synthetic data generation via forward generation (generating labels for unlabeled sentences) and inverse generation (generating sentences from generated labels). However, forward generation often produces noisy labels, while inverse generation struggles with domain drift and incomplete event annotations. To address these challenges, we introduce FIG, a hybrid approach that leverages inverse generation for high-quality data synthesis while anchoring it to domain-specific cues extracted via forward generation on unlabeled target data. FIG further enhances its synthetic data by adding missing annotations through forward generation-based refinement. Experimentation on three ED datasets from diverse domains reveals that FIG outperforms the best baseline, achieving average gains of 3.3% F1 and 5.4% F1 in the zero-shot and few-shot settings, respectively. Analysis of the generated trigger hit rate and human evaluation substantiates FIG's superior domain alignment and data quality compared to existing baselines.
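The forward-inverse loop described above can be sketched in code. This is a minimal, illustrative mock-up, not the paper's implementation: the function names are hypothetical, and the LLM calls that the paper would use for forward and inverse generation are stood in for by a keyword lookup and a template, so the control flow (cue extraction → anchored inverse generation → forward refinement) is the only part meant to reflect FIG.

```python
# Hedged sketch of a forward-inverse generation loop (illustrative names;
# real forward/inverse steps would query an LLM, not a keyword table).

def forward_generate(sentence):
    """Forward generation: label a sentence with (trigger, event_type) pairs.
    Mocked with a fixed cue table standing in for an LLM annotator."""
    table = {"infected": "Infect", "arrested": "Arrest", "treated": "Treat"}
    tokens = [tok.strip(".,") for tok in sentence.lower().split()]
    return [(tok, table[tok]) for tok in tokens if tok in table]

def extract_domain_cues(unlabeled_corpus):
    """Collect domain-specific trigger cues by running forward generation
    over unlabeled target-domain text."""
    cues = {}
    for sent in unlabeled_corpus:
        for trigger, etype in forward_generate(sent):
            cues.setdefault(etype, set()).add(trigger)
    return cues

def inverse_generate(event_type, cues):
    """Inverse generation: synthesize a labeled sentence for an event type,
    anchored to the extracted domain cues. Mocked with a template."""
    trigger = sorted(cues.get(event_type, {"occurred"}))[0]
    return f"The patient was {trigger} last week.", [(trigger, event_type)]

def refine(sentence, labels):
    """Refinement: re-run forward generation on the synthetic sentence and
    merge in any annotations the inverse step missed."""
    merged = dict(labels)
    for trigger, etype in forward_generate(sentence):
        merged.setdefault(trigger, etype)
    return sentence, sorted(merged.items())

corpus = ["Three people were infected in May.", "He was arrested and later treated."]
cues = extract_domain_cues(corpus)            # e.g. {"Infect": {"infected"}, ...}
sent, labels = inverse_generate("Infect", cues)
sent, labels = refine(sent, labels)
```

The point of the anchoring step is that `inverse_generate` only emits triggers actually observed in the target domain, which is how FIG counters the domain drift that plain inverse generation suffers from.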
Problem

Research questions and friction points this paper is trying to address.

Low-resource event detection in specialized domains (biomedical, legal, epidemiological)
Noisy labels produced by forward synthetic data generation
Domain drift and incomplete event annotations in inverse generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid forward-inverse generation
Domain-specific cue anchoring
Forward generation-based refinement
Tanmay Parekh
Student, University of California Los Angeles
Natural Language Processing · Machine Learning
Yuxuan Dong
Computer Science Department, University of California, Los Angeles
Lucas Bandarkar
Computer Science Department, University of California, Los Angeles
Artin Kim
Computer Science Department, University of California, Los Angeles
I-Hung Hsu
Google
Kai-Wei Chang
Computer Science Department, University of California, Los Angeles
Nanyun Peng
Computer Science Department, University of California, Los Angeles