🤖 AI Summary
Large language models (LLMs) suffer from low inference efficiency in resource-constrained trusted execution environments (TEEs), particularly under confidential-computing settings. Method: this work presents the first systematic performance evaluation of the DeepSeek-R1 series (specifically the 1.5B-parameter variant) on Intel Trust Domain Extensions (TDX), comparing inference across three deployment paradigms: pure CPU, CPU-GPU heterogeneous, and TDX-secured execution. Contribution/Results: the 1.5B model achieves higher inference throughput under TDX than on the baseline pure-CPU configuration, and the GPU-to-CPU performance ratio averages roughly 12x across model sizes, with smaller models showing a lower ratio. The work also offers guidance toward scalable CPU-GPU confidential-computing optimizations for privacy-preserving AI, providing both foundational insights and practical guidelines for secure, lightweight deployment of small-parameter models in TEEs.
📝 Abstract
The increasing adoption of Large Language Models (LLMs) in cloud environments raises critical security concerns, particularly regarding model confidentiality and data privacy. Confidential computing, enabled by Trusted Execution Environments (TEEs), offers a promising way to mitigate these risks. However, existing TEE implementations, which are primarily CPU-based, struggle to efficiently support the resource-intensive nature of LLM inference and training. In this work, we present the first evaluation of the DeepSeek model within a TEE-enabled confidential computing environment, specifically using Intel Trust Domain Extensions (TDX). Our study benchmarks DeepSeek's performance across CPU-only, CPU-GPU hybrid, and TEE-based implementations. For smaller models such as DeepSeek-R1-1.5B, the TDX implementation outperforms the CPU-only version when executing computations within a secure environment, highlighting the potential for efficiently deploying LLMs on resource-constrained systems without sacrificing security. The overall GPU-to-CPU performance ratio averages 12 across model sizes, with smaller models exhibiting a lower ratio. Additionally, we provide foundational insights and guidance on optimizing CPU-GPU confidential computing solutions for scalable and secure AI deployments. Our findings contribute to the advancement of privacy-preserving AI, paving the way for efficient and secure LLM inference in confidential computing environments.
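The benchmarking methodology above boils down to measuring generated tokens per second under each deployment backend (CPU-only, CPU-GPU, TDX) and comparing the ratios. The paper does not publish its harness, so the sketch below is a minimal, hypothetical illustration of such a measurement: `generate_fn` stands in for a real model call (e.g. DeepSeek-R1-1.5B inference through a framework of your choice), and the example GPU/CPU ratio is hard-coded purely to mirror the ~12x figure reported in the abstract.

```python
import time

def measure_throughput(generate_fn, prompt, n_runs=3):
    """Average generated tokens per second over n_runs.

    generate_fn(prompt) -> list of generated tokens.
    This interface is a hypothetical stand-in for a real
    inference call on a given backend (CPU, GPU, or TDX).
    """
    total_tokens = 0
    total_time = 0.0
    for _ in range(n_runs):
        start = time.perf_counter()
        tokens = generate_fn(prompt)
        total_time += time.perf_counter() - start
        total_tokens += len(tokens)
    return total_tokens / total_time

# Dummy backend: emits a fixed number of placeholder tokens so the
# harness runs standalone; swap in a real model call to benchmark.
def dummy_generate(prompt):
    return ["tok"] * 128

tps = measure_throughput(dummy_generate, "hello")

# Illustrative only: with hypothetical per-backend throughputs of
# 120 tok/s (GPU) and 10 tok/s (CPU), the ratio matches the ~12x
# average the paper reports across model sizes.
ratio_example = 120.0 / 10.0
print(f"{tps:.1f} tokens/s; example GPU/CPU ratio: {ratio_example:.0f}x")
```

Running the same harness once per backend and dividing the resulting throughputs yields the GPU-to-CPU (or TDX-to-CPU) ratios discussed in the abstract.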