🤖 AI Summary
This work addresses the limitations of large language models in discriminative tasks—namely, high inference latency, computational overhead, and API costs—while noting that existing knowledge distillation approaches often overlook intermediate reasoning steps, hindering error diagnosis. To overcome these challenges, the authors propose the Graph of Concept Predictors (GCP) framework, which explicitly models the teacher model's reasoning process as a directed acyclic graph and aligns it with a modular student network. GCP introduces a graph-structure-aware uncertainty sampling strategy and a loss-attribution-driven local retraining mechanism, enabling reasoning-aware active distillation. Experiments across eight NLP classification benchmarks demonstrate that GCP significantly improves sample efficiency and performance under limited annotation budgets, while simultaneously enhancing model interpretability and training controllability.
📝 Abstract
Deploying Large Language Models (LLMs) for discriminative workloads is often limited by inference latency, compute, and API costs at scale. Active distillation reduces these costs by querying an LLM oracle to train compact discriminative students, but most pipelines distill only final labels, discarding intermediate reasoning signals and offering limited diagnostics of what reasoning is missing and where errors arise. We propose Graph of Concept Predictors (GCP), a reasoning-aware active distillation framework that externalizes the teacher's decision process as a directed acyclic graph and mirrors it with modular concept predictors in the student. GCP enhances sample efficiency through a graph-aware acquisition strategy that targets uncertainty and disagreement at critical reasoning nodes. Additionally, it improves training stability and efficiency by performing targeted sub-module retraining, which attributes downstream loss to specific concept predictors and updates only the most influential modules. Experiments on eight NLP classification benchmarks demonstrate that GCP enhances performance under limited annotation budgets while yielding more interpretable and controllable training dynamics. Code is available at: https://github.com/Ziyang-Yu/GCP.
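To make the graph-aware acquisition idea concrete, here is a minimal hypothetical sketch (not the paper's actual algorithm; function names, the fan-out-based node weighting, and the toy data are all illustrative assumptions): each unlabeled sample gets per-node predictive distributions from the student's concept predictors, per-node entropy is weighted more heavily at "critical" reasoning nodes, and the highest-scoring samples are queued for the LLM oracle.

```python
# Illustrative sketch of graph-aware uncertainty acquisition.
# All names and the weighting scheme are assumptions, not GCP's real API.
import math

def entropy(probs):
    """Shannon entropy of a categorical distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def graph_aware_score(node_probs, node_weights):
    """Aggregate per-node uncertainty, up-weighting critical
    reasoning nodes (e.g., high fan-out in the concept DAG)."""
    return sum(node_weights[n] * entropy(p) for n, p in node_probs.items())

def select_queries(pool, node_weights, budget):
    """Rank unlabeled samples by graph-aware uncertainty and return
    the top-`budget` candidates to send to the teacher oracle."""
    scored = sorted(pool.items(),
                    key=lambda kv: graph_aware_score(kv[1], node_weights),
                    reverse=True)
    return [sample_id for sample_id, _ in scored[:budget]]

# Toy example: two samples, two concept nodes; "premise" is assumed to
# have higher fan-out in the reasoning DAG, so it carries more weight.
weights = {"premise": 2.0, "tone": 1.0}
pool = {
    "s1": {"premise": [0.5, 0.5], "tone": [0.9, 0.1]},    # uncertain premise
    "s2": {"premise": [0.99, 0.01], "tone": [0.6, 0.4]},  # confident premise
}
print(select_queries(pool, weights, budget=1))  # → ['s1']
```

Under this scheme, a sample whose uncertainty sits at a high-weight node ("s1") is queried before one whose uncertainty is confined to a peripheral node ("s2"), which is the intuition behind targeting disagreement at critical reasoning nodes rather than only at the final label.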