AI Summary
To address model theft and intellectual property (IP) leakage risks when deploying large language models (LLMs) on partially trusted or untrusted devices, this paper proposes SLIP, the first formal framework enabling information-theoretically secure hybrid inference. Methodologically, SLIP decomposes model weight matrices additively, integrating random masking with probabilistic verification to enable collaborative inference across trusted and untrusted environments. We formally prove that the protocol achieves information-theoretic security under the honest-but-curious adversary model and robustness with negligible soundness error under the malicious adversary model, while preserving computational efficiency. Key contributions include: (1) pioneering the application of information-theoretic security to LLM inference protection; (2) establishing a provably secure model decomposition scheme with well-defined execution semantics; and (3) designing a lightweight protocol that simultaneously ensures practical deployability and strong security guarantees. Experimental evaluation confirms SLIP's efficacy in mitigating IP leakage without significant overhead.
Abstract
Large Language Models (LLMs) represent valuable intellectual property (IP), reflecting significant investments in training data, compute, and expertise. Deploying these models on partially trusted or insecure devices introduces substantial risk of model theft, making it essential to design inference protocols with provable security guarantees.
We present the formal framework and security foundations of SLIP, a hybrid inference protocol that splits model computation between a trusted and an untrusted resource. We define and analyze the key notions of model decomposition and hybrid inference protocols, and introduce formal properties including safety, correctness, efficiency, and t-soundness. We construct secure inference protocols based on additive decompositions of weight matrices, combined with masking and probabilistic verification techniques. We prove that these protocols achieve information-theoretic security against honest-but-curious adversaries, and provide robustness against malicious adversaries with negligible soundness error.
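The additive decomposition underlying these protocols can be sketched in a few lines. The snippet below is a minimal illustration over the reals with NumPy, not the paper's actual construction: exact information-theoretic hiding requires sampling shares uniformly from a finite ring, and all function names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_split(W, rng):
    # Trusted side keeps a random share; the untrusted side gets the
    # residual. Each share alone carries no information about W when
    # shares are drawn uniformly over a finite ring (Gaussian noise
    # here is only a stand-in for illustration).
    W_trusted = rng.normal(size=W.shape)
    W_untrusted = W - W_trusted
    return W_trusted, W_untrusted

def hybrid_matmul(x, W_trusted, W_untrusted):
    # Each party multiplies the input by its own share; summing the
    # partial products recovers the original computation x @ W.
    return x @ W_trusted + x @ W_untrusted

W = rng.normal(size=(4, 3))      # a weight matrix to protect
x = rng.normal(size=(2, 4))      # a batch of inputs
W_t, W_u = additive_split(W, rng)
assert np.allclose(hybrid_matmul(x, W_t, W_u), x @ W)
```

Because matrix multiplication is linear, the split is transparent to the inference result while neither device ever holds the full weight matrix.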
This paper focuses on the theoretical underpinnings of SLIP: precise definitions, formal protocols, and proofs of security. Empirical validation and decomposition heuristics appear in the companion SLIP paper. Together, the two works provide a complete account of securing LLM IP via hybrid inference, bridging theory and practice.
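To illustrate the flavor of probabilistic verification against a malicious party, one standard instantiation is a Freivalds-style check: the trusted side verifies a claimed matrix product with cheap matrix-vector multiplications, accepting a wrong result with probability at most 2^-rounds. This is a generic sketch, not necessarily the paper's exact verification protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def freivalds_check(A, B, C, rng, rounds=20):
    # Verify the claim C == A @ B without recomputing the full product:
    # each round multiplies by a random 0/1 vector r and compares
    # A @ (B @ r) against C @ r.
    n = B.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=n).astype(float)
        if not np.allclose(A @ (B @ r), C @ r):
            return False  # inconsistency found: reject
    # Accept; a wrong C slips through with probability <= 2**-rounds.
    return True

A = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 5))
honest = A @ B
tampered = honest.copy()
tampered[0, 0] += 1.0  # a malicious party perturbs one entry
```

Running the check, `freivalds_check(A, B, honest, rng)` accepts while the tampered product is rejected with overwhelming probability, matching the "negligible soundness error" guarantee described above.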