A Tale of LLMs and Induced Small Proxies: Scalable Agents for Knowledge Mining

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the tension between the high deployment cost of large language models (LLMs) and the poor generalization of traditional pipelines in large-scale knowledge mining, this paper proposes Falconer, a unified, instruction-driven framework that decomposes knowledge extraction into two atomic operations: "get label" and "get span." Falconer employs an LLM as both planner and annotator: it decomposes complex user instructions into executable pipelines and automatically generates supervision signals, enabling end-to-end training of lightweight proxy models within a trainable, collaborative pipeline. Evaluated on diverse knowledge extraction tasks, Falconer retains over 90% of the strongest LLM's instruction-following accuracy while reducing inference cost by up to 90% and increasing throughput more than 20x. It also introduces the first benchmarks for measuring consistency between proxy models and annotations from humans and large models. Key innovations include atomic task modeling and an LLM-proxy co-training paradigm, which together improve the scalability, generalization, and practicality of knowledge mining systems.
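The two atomic operations can be sketched as a single instruction-following interface. The following is a minimal illustration only; all class names, signatures, and the keyword heuristics are hypothetical stand-ins for the trained lightweight proxy, not the paper's actual implementation:

```python
# Hypothetical sketch of Falconer's two atomic operations.
# The keyword heuristics below stand in for a trained proxy model.

class ProxyModel:
    """One instruction-following model replacing task-specific components."""

    def get_label(self, instruction: str, text: str, labels: list[str]) -> str:
        # Atomic classification: pick one label for the text.
        for label in labels:
            if label.lower() in text.lower():
                return label
        return labels[-1]  # fall back to the last (e.g. "other") label

    def get_span(self, instruction: str, text: str, cue: str) -> str:
        # Atomic extraction: return the span matching the cue, if present.
        idx = text.lower().find(cue.lower())
        return text[idx:idx + len(cue)] if idx >= 0 else ""

proxy = ProxyModel()
print(proxy.get_label("Classify the topic.", "New physics of superconductors",
                      ["physics", "other"]))                      # → physics
print(proxy.get_span("Extract the system name.",
                     "We introduce Falconer, a framework.", "Falconer"))  # → Falconer
```

Because both operations share one instruction-conditioned interface, a single small model can serve every classifier and extractor slot in a mining pipeline.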

📝 Abstract
At the core of Deep Research is knowledge mining, the task of extracting structured information from massive unstructured text in response to user instructions. Large language models (LLMs) excel at interpreting such instructions but are prohibitively expensive to deploy at scale, while traditional pipelines of classifiers and extractors remain efficient yet brittle and unable to generalize to new tasks. We introduce Falconer, a collaborative framework that combines the agentic reasoning of LLMs with lightweight proxy models for scalable knowledge mining. In Falconer, LLMs act as planners, decomposing user instructions into executable pipelines, and as annotators, generating supervision to train small proxies. The framework unifies classification and extraction into two atomic operations, get label and get span, enabling a single instruction-following model to replace multiple task-specific components. To evaluate the consistency between proxy models incubated by Falconer and annotations provided by humans and large models, we construct new benchmarks covering both planning and end-to-end execution. Experiments show that Falconer closely matches state-of-the-art LLMs in instruction-following accuracy while reducing inference cost by up to 90% and accelerating large-scale knowledge mining by more than 20x, offering an efficient and scalable foundation for Deep Research.
Problem

Research questions and friction points this paper is trying to address.

Scalable knowledge mining from unstructured text using LLMs
Reducing high inference costs of large language models
Replacing brittle traditional pipelines with unified atomic operations
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs plan pipelines and annotate for small proxies
Unifies classification and extraction into atomic operations
Reduces inference cost by up to 90% and accelerates mining by more than 20x
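The planner-and-proxy split above can be sketched end to end. In this hedged illustration, all step names and the fixed decomposition are assumed for the example; both the LLM planner and the trained proxy are mocked with trivial functions:

```python
# Hypothetical sketch of the planner/executor split: an LLM planner
# decomposes a user instruction into atomic steps, which a lightweight
# proxy then executes over large corpora. Both roles are mocked here.

def mock_planner(user_instruction: str) -> list[dict]:
    """Stand-in for the LLM planner: returns a fixed decomposition into
    get_label / get_span steps (a real planner would condition on the
    instruction)."""
    return [
        {"op": "get_label", "ask": "Does the text mention an acquisition?",
         "labels": ["acquisition", "other"]},
        {"op": "get_span", "ask": "acquirer", "needs": "acquisition"},
        {"op": "get_span", "ask": "target", "needs": "acquisition"},
    ]

def run(steps: list[dict], text: str) -> dict:
    """Toy executor standing in for the trained proxy: keyword
    classification plus naive span picking, with extraction steps
    gated on the classification result."""
    out, label = {}, None
    for step in steps:
        if step["op"] == "get_label":
            label = "acquisition" if "acquire" in text.lower() else "other"
            out["label"] = label
        elif step["op"] == "get_span" and label == step["needs"]:
            # A real proxy would extract the answering span; here we
            # naively take the first or last word.
            words = text.split()
            out[step["ask"]] = words[0] if step["ask"] == "acquirer" else words[-1]
    return out

steps = mock_planner("Find company acquisitions and name both parties.")
print(run(steps, "Microsoft acquired Activision"))
# → {'label': 'acquisition', 'acquirer': 'Microsoft', 'target': 'Activision'}
```

The cost saving comes from running only the cheap executor over the corpus: the expensive LLM is consulted once for planning and annotation, not per document.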