AI Summary
Large language models (LLMs) exhibit strong domain sensitivity and poor cross-task transferability in structured information extraction from specialized scientific domains, e.g., neuroscience. To address this, we propose a modular, task-agnostic enhancement framework integrating ontology-guided prompting, a self-assessing judge agent, iterative feedback-driven refinement, and human-in-the-loop validation. This design enables precise, controllable extraction of structured knowledge from unstructured scientific literature. Compared to existing LLM-based approaches, our framework significantly improves expert-level semantic understanding, domain adaptability, and generalization capability. Extensive experiments across multiple neuroscience information extraction tasks demonstrate substantial reductions in domain dependency, superior transfer performance on unseen tasks and datasets, and practical applicability in real-world scientific curation workflows.
Abstract
The ability to extract structured information from unstructured sources, such as free-text documents and scientific literature, is critical for accelerating scientific discovery and knowledge synthesis. Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language processing tasks, including structured information extraction. However, their effectiveness often diminishes in specialized, domain-specific contexts that require nuanced understanding and expert-level domain knowledge. In addition, existing LLM-based approaches frequently exhibit poor transferability across tasks and domains, limiting their scalability and adaptability. To address these challenges, we introduce StructSense, a modular, task-agnostic, open-source framework for structured information extraction built on LLMs. StructSense is guided by domain-specific symbolic knowledge encoded in ontologies, enabling it to navigate complex domain content more effectively. It further incorporates agentic capabilities through self-evaluative judges that form a feedback loop for iterative refinement, and includes human-in-the-loop mechanisms to ensure quality and validation. We demonstrate that StructSense can overcome both the limitations of domain sensitivity and the lack of cross-task generalizability, as shown through its application to diverse neuroscience information extraction tasks.
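The extract-judge-refine cycle described above can be sketched in a few lines. This is a minimal illustrative sketch, not the actual StructSense implementation: the function names, the scoring scheme, and the stub extraction logic are all hypothetical stand-ins for the LLM-backed components the abstract describes.

```python
# Hypothetical sketch of an extract / judge / refine loop with a
# human-in-the-loop hand-off at the end. All names are illustrative;
# real components would wrap ontology-guided LLM calls.

def extract(text: str, feedback: str = "") -> dict:
    # Stand-in for ontology-guided LLM extraction: here we just
    # pick title-cased tokens as candidate entities.
    entities = [w for w in text.split() if w.istitle()]
    if feedback:  # a real system would condition the prompt on feedback
        entities = sorted(set(entities))
    return {"entities": entities}

def judge(result: dict) -> tuple[float, str]:
    # Stand-in for the self-evaluative judge agent: returns a score
    # in [0, 1] plus textual feedback for the next refinement round.
    ents = result["entities"]
    score = 1.0 if ents and len(ents) == len(set(ents)) else 0.5
    return score, "" if score == 1.0 else "remove duplicate entities"

def refine_loop(text: str, threshold: float = 0.9, max_iters: int = 3) -> dict:
    # Iterative feedback-driven refinement: re-extract until the judge
    # is satisfied or the iteration budget runs out.
    feedback = ""
    result: dict = {"entities": []}
    for _ in range(max_iters):
        result = extract(text, feedback)
        score, feedback = judge(result)
        if score >= threshold:
            break
    return result  # in practice, handed to a human curator for validation

print(refine_loop("The Hippocampus connects to the Cortex via the Fornix"))
```

The key design point mirrored here is that the judge produces machine-readable feedback rather than a bare score, so each refinement round has something concrete to act on before results reach a human validator.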