🤖 AI Summary
To address the scarcity of low-frequency entity annotations and poor few-shot performance in biomedical named entity recognition (NER), this paper proposes a hierarchically informed data generation method. It explicitly models the semantic hierarchy of the UMLS knowledge base and leverages GPT-3.5 to generate contextually rich synthetic samples, enabling few-shot data augmentation without additional human annotation. The approach integrates hierarchical prompt engineering with BERT-Large and a DANN (Data Augmentation with Nearest Neighbor Classifier) model in an ensemble enhancement strategy. Evaluated on four benchmark biomedical NER datasets, the BERT-Large ensemble achieves an average F1-score improvement of 42.29%, while the DANN ensemble improves by 25.03%, substantially alleviating the bottleneck in sparse entity recognition. The core contribution lies in deeply coupling domain-specific knowledge-graph hierarchies into the large language model (LLM) data generation process, establishing an interpretable and scalable knowledge-augmented paradigm for few-shot biomedical NER.
📝 Abstract
We present HILGEN, a Hierarchically-Informed Data Generation approach that combines domain knowledge from the Unified Medical Language System (UMLS) with synthetic data generated by large language models (LLMs), specifically GPT-3.5. Our approach leverages UMLS's hierarchical structure to expand training data with related concepts, while incorporating contextual information from LLMs through targeted prompts aimed at automatically generating synthetic examples for sparsely occurring named entities. The performance of the HILGEN approach was evaluated across four biomedical NER datasets (MIMIC-III, BC5CDR, NCBI-Disease, and MedMentions) using BERT-Large and DANN (Data Augmentation with Nearest Neighbor Classifier) models, applying various data generation strategies, including UMLS, GPT-3.5, and their best ensemble. For the BERT-Large model, incorporating UMLS led to an average F1 score improvement of 40.36%, while using GPT-3.5 resulted in a comparable average increase of 40.52%. The Best-Ensemble approach using BERT-Large achieved the highest improvement, with an average increase of 42.29%. The DANN model's F1 score improved by 22.74% on average using the UMLS-only approach. The GPT-3.5-based method resulted in a 21.53% increase, and the Best-Ensemble DANN model showed a more notable improvement, with an average increase of 25.03%. Our proposed HILGEN approach improves NER performance in few-shot settings without requiring additional manually annotated data. Our experiments demonstrate that an effective strategy for optimizing biomedical NER is to combine biomedical knowledge curated in the past, such as the UMLS, with generative LLMs to create synthetic training instances. Our future research will focus on exploring additional innovative synthetic data generation strategies for further improving NER performance.
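To make the prompting idea concrete, here is a minimal, hypothetical sketch of how a hierarchy-informed prompt for GPT-3.5 might be assembled for a sparsely occurring entity. The function name, the example UMLS neighbor concepts, and the prompt wording are all illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch: build a synthetic-data prompt for a sparse entity,
# grounding the request in the entity's UMLS hierarchical neighbors
# (e.g., parent and sibling concepts). All names here are assumptions.

def build_hierarchical_prompt(entity, entity_type, umls_neighbors, n=5):
    """Compose a prompt asking an LLM for n synthetic sentences that
    mention `entity`, with UMLS-related concepts supplied as context."""
    context = ", ".join(umls_neighbors)
    return (
        f"Generate {n} clinical sentences that each mention the "
        f"{entity_type} '{entity}'. Related UMLS concepts that may "
        f"appear as surrounding context: {context}. "
        f"Mark the target entity with [E]...[/E] tags."
    )

# Toy UMLS neighborhood for a rare disease mention (illustrative only).
neighbors = ["lysosomal storage disease", "sphingolipidosis"]
prompt = build_hierarchical_prompt("Fabry disease", "Disease", neighbors)
print(prompt)
```

The returned string would then be sent to the LLM, and the tagged entity spans in its output converted into NER training annotations, sidestepping manual labeling.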