STRUCTSENSE: A Task-Agnostic Agentic Framework for Structured Information Extraction with Human-In-The-Loop Evaluation and Benchmarking

πŸ“… 2025-07-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Large language models (LLMs) exhibit strong domain sensitivity and poor cross-task transferability in structured information extraction from specialized scientific domainsβ€”e.g., neuroscience. To address this, we propose a modular, task-agnostic enhancement framework integrating ontology-guided prompting, a self-assessing referee agent, iterative feedback-driven refinement, and human-in-the-loop validation. This design enables precise, controllable extraction of structured knowledge from unstructured scientific literature. Compared to existing LLM-based approaches, our framework significantly improves expert-level semantic understanding, domain adaptability, and generalization capability. Extensive experiments across multiple neuroscience information extraction tasks demonstrate substantial reductions in domain dependency, superior transfer performance to unseen tasks and datasets, and practical applicability in real-world scientific curation workflows.

πŸ“ Abstract
The ability to extract structured information from unstructured sources, such as free-text documents and scientific literature, is critical for accelerating scientific discovery and knowledge synthesis. Large Language Models (LLMs) have demonstrated remarkable capabilities in various natural language processing tasks, including structured information extraction. However, their effectiveness often diminishes in specialized, domain-specific contexts that require nuanced understanding and expert-level domain knowledge. In addition, existing LLM-based approaches frequently exhibit poor transferability across tasks and domains, limiting their scalability and adaptability. To address these challenges, we introduce StructSense, a modular, task-agnostic, open-source framework for structured information extraction built on LLMs. StructSense is guided by domain-specific symbolic knowledge encoded in ontologies, enabling it to navigate complex domain content more effectively. It further incorporates agentic capabilities through self-evaluative judges that form a feedback loop for iterative refinement, and includes human-in-the-loop mechanisms to ensure quality and validation. We demonstrate that StructSense can overcome both the limitations of domain sensitivity and the lack of cross-task generalizability, as shown through its application to diverse neuroscience information extraction tasks.
Problem

Research questions and friction points this paper is trying to address.

Extracting structured data from unstructured sources efficiently
Overcoming domain-specific limitations in LLM-based extraction
Enhancing cross-task adaptability with human-in-the-loop validation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular task-agnostic framework using LLMs
Domain knowledge guided by ontologies
Agentic self-evaluation with human feedback
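The extract-judge-refine loop with a human-in-the-loop gate, as described in the abstract, can be sketched roughly as follows. This is a minimal illustration of the control flow, not the actual StructSense implementation; all names and interfaces below are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Extraction:
    """A hypothetical container for structured extraction output."""
    entities: dict[str, str]
    feedback: str = ""  # judge feedback carried into the next round

def run_pipeline(
    extract: Callable[[str, str], Extraction],  # LLM extraction agent
    judge: Callable[[Extraction], float],       # self-evaluative referee agent
    text: str,
    threshold: float = 0.8,
    max_rounds: int = 3,
) -> tuple[Extraction, bool]:
    """Iteratively refine an extraction until the judge's score passes
    the threshold; otherwise flag the result for human review."""
    result = extract(text, "")
    for _ in range(max_rounds):
        if judge(result) >= threshold:
            return result, False              # accepted automatically
        result = extract(text, result.feedback)  # refine using judge feedback
    return result, True                       # escalate to a human curator

# Toy stand-ins demonstrating the control flow only.
def toy_extract(text: str, feedback: str) -> Extraction:
    ents = {"region": "hippocampus"} if "hippocampus" in text else {}
    return Extraction(entities=ents, feedback="add brain region")

def toy_judge(e: Extraction) -> float:
    return 1.0 if e.entities else 0.0

result, needs_review = run_pipeline(toy_extract, toy_judge,
                                    "fMRI study of the hippocampus")
```

In the paper's design, the `extract` step would also be conditioned on ontology terms, and low-scoring outputs would surface in a human validation interface rather than a boolean flag.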
Tek Raj Chhetri
Postdoc, MIT; Founder, CAIR-Nepal;
Knowledge Graphs · Privacy · AI · Distributed Systems
Yibei Chen
Massachusetts Institute of Technology
Puja Trivedi
McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
Dorota Jarecka
McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
Saif Haobsh
Fylo Labs Inc., New York, NY, USA
Patrick Ray
Allen Institute for Brain Science, Seattle, WA, USA
Lydia Ng
Allen Institute for Brain Science, Seattle, WA, USA
Satrajit S. Ghosh
McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA