SLIP-SEC: Formalizing Secure Protocols for Model IP Protection

πŸ“… 2025-10-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address model theft and intellectual property (IP) leakage when deploying large language models (LLMs) on partially trusted or untrusted devices, this paper formalizes SLIP, a framework for information-theoretically secure hybrid inference. Methodologically, SLIP decomposes model weight matrices additively and combines random masking with probabilistic verification to enable collaborative inference across trusted and untrusted environments. The protocol is proven to achieve information-theoretic security against honest-but-curious adversaries and robustness with negligible soundness error against malicious ones, while preserving computational efficiency. Key contributions include: (1) applying information-theoretic security to LLM inference protection; (2) a provably secure model decomposition scheme with well-defined execution semantics; and (3) a lightweight protocol that combines practical deployability with strong security guarantees. Empirical validation of SLIP's efficacy in mitigating IP leakage without significant overhead appears in the companion paper.
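The additive decomposition with masking described above can be illustrated with a minimal sketch: a weight matrix is split into two additive shares over a finite field, so that either share alone is uniformly random and reveals nothing about the weights, yet the two partial matrix-vector products recombine to the true result. The modulus, function names, and parameters here are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

P = 65521  # illustrative prime modulus; not the paper's actual field choice
rng = np.random.default_rng(0)

def additive_split(W):
    """Split W into shares with W = (W_t + W_u) mod P.
    The mask W_u is uniform over Z_P, so either share alone is
    uniformly distributed and carries no information about W."""
    W_u = rng.integers(0, P, size=W.shape, dtype=np.int64)
    W_t = (W - W_u) % P
    return W_t, W_u

def hybrid_matvec(W_t, W_u, x):
    """Each party multiplies its own share by x; summing the partial
    results mod P recovers W @ x mod P without either party seeing W."""
    return (W_t @ x + W_u @ x) % P

W = rng.integers(0, P, size=(4, 3), dtype=np.int64)
x = rng.integers(0, P, size=3, dtype=np.int64)
W_t, W_u = additive_split(W)
assert np.array_equal(hybrid_matvec(W_t, W_u, x), (W @ x) % P)
```

The key property is that the untrusted device holds only `W_u`, a uniformly random matrix, so its view is independent of the true weights; this is the standard additive secret-sharing idea underlying the hybrid protocol.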

πŸ“ Abstract
Large Language Models (LLMs) represent valuable intellectual property (IP), reflecting significant investments in training data, compute, and expertise. Deploying these models on partially trusted or insecure devices introduces substantial risk of model theft, making it essential to design inference protocols with provable security guarantees. We present the formal framework and security foundations of SLIP, a hybrid inference protocol that splits model computation between a trusted and an untrusted resource. We define and analyze the key notions of model decomposition and hybrid inference protocols, and introduce formal properties including safety, correctness, efficiency, and t-soundness. We construct secure inference protocols based on additive decompositions of weight matrices, combined with masking and probabilistic verification techniques. We prove that these protocols achieve information-theoretic security against honest-but-curious adversaries, and provide robustness against malicious adversaries with negligible soundness error. This paper focuses on the theoretical underpinnings of SLIP: precise definitions, formal protocols, and proofs of security. Empirical validation and decomposition heuristics appear in the companion SLIP paper. Together, the two works provide a complete account of securing LLM IP via hybrid inference, bridging both practice and theory.
Problem

Research questions and friction points this paper is trying to address.

Formalizes secure inference protocols to protect LLM intellectual property from theft
Splits model computation between trusted and untrusted resources with security guarantees
Achieves information-theoretic security against honest-but-curious and malicious adversaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid inference protocol splitting computation between resources
Additive decomposition of weight matrices with masking techniques
Information-theoretic security against honest-but-curious adversaries
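The probabilistic-verification component mentioned alongside these contributions can be sketched as a spot check: the trusted party recomputes a few randomly chosen output coordinates of the untrusted party's claimed result and compares. This is a hedged illustration of the general technique, not the paper's exact verification protocol; `spot_check` and its parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def spot_check(W_u, x, y_claimed, k=2):
    """Recompute k randomly chosen coordinates of W_u @ x and compare
    them with the untrusted party's claimed vector. A claim that
    tampers with m of the n coordinates escapes detection with
    probability roughly (1 - m/n)**k."""
    idx = rng.choice(len(y_claimed), size=k, replace=False)
    return np.array_equal(W_u[idx] @ x, y_claimed[idx])

W_u = rng.integers(0, 100, size=(8, 4))
x = rng.integers(0, 100, size=4)
honest = W_u @ x
assert spot_check(W_u, x, honest)        # an honest answer always passes

tampered = honest + 1                    # corrupt every coordinate
assert not spot_check(W_u, x, tampered)  # full corruption is always caught
```

Repeating the check (or increasing `k`) drives the probability of undetected cheating down geometrically, which matches the summary's claim of negligible soundness error against malicious adversaries.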