Trustworthy and Controllable Professional Knowledge Utilization in Large Language Models with TEE-GPU Execution

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address inequitable value distribution and misaligned responsibilities among domain experts—whose data contributions generate downstream value in which they do not share, while they still bear copyright and privacy risks—this paper proposes PKUS, the first system to model professional knowledge as a verifiable, auditable, and isolated first-class entity. PKUS employs lightweight adapter encapsulation coupled with hardware-enforced Trusted Execution Environment (TEE) isolation, enabling physically separated yet coordinated sharded execution alongside GPU-based backbone models. Its innovations include TEE-GPU heterogeneous scheduling, multi-provider knowledge aggregation, a LoRA variant for knowledge encoding, and hardware-rooted lifecycle authentication. Evaluated on SST-2, MNLI, and SQuAD, PKUS matches full fine-tuning and standard LoRA in accuracy while achieving the lowest inference latency, with an 8.1–11.9× speedup over CPU-only TEE execution.

📝 Abstract
Future improvements in large language model (LLM) services increasingly hinge on access to high-value professional knowledge rather than more generic web data. However, the data providers of this knowledge face a skewed tradeoff between income and risk: they receive little share of downstream value yet retain copyright and privacy liability, making them reluctant to contribute their assets to LLM services. Existing techniques do not offer a trustworthy and controllable way to use professional knowledge, because they keep providers in the dark and combine knowledge parameters with the underlying LLM backbone. In this paper, we present PKUS, the Professional Knowledge Utilization System, which treats professional knowledge as a first-class, separable artifact. PKUS keeps the backbone model on GPUs and encodes each provider's contribution as a compact adapter that executes only inside an attested Trusted Execution Environment (TEE). A hardware-rooted lifecycle protocol, adapter pruning, multi-provider aggregation, and split-execution scheduling together make this design practical at serving time. On SST-2, MNLI, and SQuAD with GPT-2 Large and Llama-3.2-1B, PKUS preserves model utility, matching the accuracy and F1 of full fine-tuning and plain LoRA, while achieving the lowest per-request latency with an 8.1–11.9× speedup over CPU-only TEE inference and naive CPU-GPU co-execution.
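The split-execution idea in the abstract—frozen backbone on the GPU, provider adapter only inside the TEE—can be illustrated with a minimal NumPy sketch of a single LoRA-style linear layer. All names, dimensions, and weights here are hypothetical; the paper's actual system runs the adapter path inside an attested enclave, which this sketch only models as a separate function:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2  # hidden size and adapter rank (illustrative values)
W = rng.standard_normal((d, d))        # frozen backbone weight (stays on GPU)
A = rng.standard_normal((r, d)) * 0.1  # adapter down-projection (held in TEE)
B = rng.standard_normal((d, r)) * 0.1  # adapter up-projection (held in TEE)
x = rng.standard_normal(d)

def gpu_backbone(x):
    # GPU side: only the public backbone weights participate.
    return W @ x

def tee_adapter(x):
    # Enclave side: the provider's adapter parameters never leave the TEE.
    return B @ (A @ x)

# Split execution sums the two partial results, so the knowledge
# parameters are never co-located with the backbone model.
y_split = gpu_backbone(x) + tee_adapter(x)

# Equivalent monolithic LoRA forward pass, shown only for comparison.
y_mono = (W + B @ A) @ x
assert np.allclose(y_split, y_mono)
```

The equivalence holds because (W + BA)x = Wx + B(Ax); the design exploits this linearity so that only the small rank-r adapter path, not the full backbone, has to run inside the slower trusted environment.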
Problem

Research questions and friction points this paper is trying to address.

How can professional knowledge be used in LLM services in a trustworthy and controllable way?
How can professional knowledge be separated from the base model so that providers retain control?
How can knowledge adapters execute securely inside a Trusted Execution Environment without prohibitive latency?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Separates professional knowledge into compact adapters.
Executes adapters inside a Trusted Execution Environment.
Uses a hardware-rooted lifecycle protocol and split-execution scheduling.
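The multi-provider aggregation mentioned in the summary can be sketched the same way: each provider's adapter delta is computed in isolation and the deltas are summed before rejoining the backbone output. Everything below (provider count, dimensions, weights) is a hypothetical illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

d, r = 8, 2  # hidden size and adapter rank (illustrative values)
W = rng.standard_normal((d, d))  # frozen backbone weight
x = rng.standard_normal(d)

# Hypothetical adapters from three providers, each held in its own
# isolated environment in the real system.
adapters = [
    (rng.standard_normal((d, r)) * 0.1, rng.standard_normal((r, d)) * 0.1)
    for _ in range(3)
]

def aggregate_adapters(x):
    # Each provider's delta B(Ax) is computed separately, then summed,
    # so no provider's parameters need to be merged into the backbone.
    return sum(B @ (A @ x) for B, A in adapters)

y = W @ x + aggregate_adapters(x)

# Reference: merging all adapters into the backbone gives the same output.
y_ref = (W + sum(B @ A for B, A in adapters)) @ x
assert np.allclose(y, y_ref)
```

Because LoRA deltas are additive, aggregation needs only a sum of small rank-r contributions per request, which keeps the TEE-side work proportional to the number of providers rather than to the backbone size.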
Yifeng Cai
Peking University
Zhida An
Peking University
Yuhan Meng
Peking University
Houqian Liu
Peking University
Pengli Wang
Peking University
Yao Guo
Beijing Institute of Technology
Ding Li
Peking University