🤖 AI Summary
Domain experts whose data fuels LLM services capture little of the downstream value yet bear the copyright and privacy risks. To fix this misalignment, the paper proposes PKUS, the first system to model professional knowledge as a verifiable, auditable, and isolated first-class entity. PKUS encapsulates each provider's contribution in a lightweight adapter confined to a hardware-enforced Trusted Execution Environment (TEE), enabling execution that is physically separated from, yet coordinated with, the GPU-hosted backbone model. Its innovations include TEE-GPU heterogeneous scheduling, multi-provider knowledge aggregation, a LoRA variant for knowledge encoding, and hardware-rooted lifecycle authentication. Evaluated on SST-2, MNLI, and SQuAD, PKUS matches full fine-tuning and standard LoRA in accuracy while achieving the lowest inference latency, running 8.1–11.9× faster than CPU-only TEE inference.
📝 Abstract
Future improvements in large language model (LLM) services increasingly hinge on access to high-value professional knowledge rather than more generic web data. However, the providers of this knowledge face a skewed tradeoff between income and risk: they receive little share of the downstream value yet retain copyright and privacy liability, making them reluctant to contribute their assets to LLM services. Existing techniques offer no trustworthy, controllable way to use professional knowledge, because they keep providers in the dark and entangle knowledge parameters with the underlying LLM backbone.
In this paper, we present PKUS, the Professional Knowledge Utilization System, which treats professional knowledge as a first-class, separable artifact. PKUS keeps the backbone model on GPUs and encodes each provider's contribution as a compact adapter that executes only inside an attested Trusted Execution Environment (TEE). A hardware-rooted lifecycle protocol, adapter pruning, multi-provider aggregation, and split-execution scheduling together make this design practical at serving time. On SST-2, MNLI, and SQuAD with GPT-2 Large and Llama-3.2-1B, PKUS preserves model utility, matching the accuracy and F1 of full fine-tuning and plain LoRA, while achieving the lowest per-request latency with an 8.1–11.9× speedup over CPU-only TEE inference and naive CPU-GPU co-execution.
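The split-execution idea described above (a frozen backbone layer computed on the GPU, plus a provider's low-rank LoRA adapter applied in an isolated environment, with only activations crossing the boundary) can be sketched in plain NumPy. This is a minimal illustration under assumed names and shapes, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 768, 8  # hidden size and LoRA rank (illustrative values)

# Frozen backbone weight: lives and runs on the GPU side.
W = rng.standard_normal((d, d)) / np.sqrt(d)

# Provider's adapter: low-rank factors that never leave the TEE side.
# Standard LoRA init: A random, B zero, so the adapter starts as a no-op.
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))

x = rng.standard_normal((1, d))  # activation entering the layer

# Backbone path, computed where the base model lives.
h_backbone = x @ W.T

# Adapter path, computed in isolation: only x crosses the boundary;
# W stays outside the TEE and A, B stay inside it, keeping the
# professional knowledge a separable artifact.
h_adapter = (x @ A.T) @ B.T

# The serving layer sums the two paths, as in plain LoRA.
h = h_backbone + h_adapter
```

Because `B` is zero-initialized, the combined output initially equals the backbone output alone; training updates `A` and `B` so the adapter encodes the provider's knowledge while the backbone stays frozen.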