Performance of Confidential Computing GPUs

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Confidential computing introduces significant performance overheads in GPU-accelerated large language model (LLM) inference, yet its quantitative impact—particularly under dynamic model swapping—remains uncharacterized. Method: We conduct the first systematic empirical study quantifying the effects of confidential computing on LLM inference performance on NVIDIA H100 GPUs within a single-VM environment. Our evaluation incorporates relaxed batching, multi-LLM round-robin scheduling, and diverse synthetic traffic patterns to model realistic dynamic loading/unloading scenarios. Contribution/Results: We identify encryption/decryption overhead as the primary bottleneck for model swapping. Compared to non-confidential execution, confidential mode incurs 20–30% higher end-to-end latency, 15–20% lower SLA compliance rate, 45–70% reduced throughput, and ~50% lower GPU utilization. These findings establish a critical empirical benchmark for performance modeling and optimization of confidential AI inference systems.
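The summary above attributes the gap mainly to encrypting and decrypting model weights each time a model is loaded onto the GPU. As a rough illustration (not the paper's model), a back-of-the-envelope sketch with assumed bandwidth numbers shows how crypto work on the host-to-device path inflates swap-dominated latency:

```python
# Hypothetical back-of-the-envelope model of swap-dominated latency under
# confidential computing. All constants are illustrative assumptions, not
# measured values from the paper.

PCIE_GBPS = 25.0       # assumed effective host-to-device copy bandwidth (GB/s)
CC_CRYPTO_GBPS = 10.0  # assumed bounce-buffer encrypt/decrypt rate (GB/s)

def swap_time_s(model_gb: float, confidential: bool) -> float:
    """Time to load a model's weights onto the GPU, with optional CC crypto cost."""
    t = model_gb / PCIE_GBPS
    if confidential:
        # In CC mode, DMA traffic passes through an encrypted bounce buffer,
        # adding crypto work on top of the plain PCIe copy.
        t += model_gb / CC_CRYPTO_GBPS
    return t

def end_to_end_s(model_gb: float, infer_s: float, confidential: bool) -> float:
    """End-to-end request latency when the request triggers a model swap."""
    return swap_time_s(model_gb, confidential) + infer_s

plain = end_to_end_s(14.0, 2.0, confidential=False)
cc = end_to_end_s(14.0, 2.0, confidential=True)
print(f"relative overhead: {cc / plain - 1:.0%}")
```

Under this toy model the overhead grows with model size and shrinks as inference time dominates, which is consistent with swapping, not compute, being the bottleneck the study identifies.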

📝 Abstract
This work examines latency, throughput, and other metrics when performing inference on confidential GPUs. We explore different traffic patterns and scheduling strategies using a single Virtual Machine with one NVIDIA H100 GPU to perform relaxed batch inference on multiple Large Language Models (LLMs), operating under the constraint that models must be swapped in and out of GPU memory, which necessitates efficient scheduling control. The experiments simulate diverse real-world scenarios by varying parameters such as traffic load, traffic distribution patterns, scheduling strategies, and Service Level Agreement (SLA) requirements. The findings provide insights into the differences between confidential and non-confidential settings when performing inference in scenarios requiring active model swapping. Results indicate that in No-CC mode, end-to-end latency for relaxed batch inference with model swapping is 20-30% lower than in confidential mode. Additionally, SLA attainment is 15-20% higher in No-CC settings. Throughput in No-CC scenarios surpasses that of confidential mode by 45-70%, and GPU utilization is approximately 50% higher in No-CC environments. Overall, performance in the confidential setting is inferior to that in the No-CC scenario, primarily due to the additional encryption and decryption overhead required for loading models onto the GPU in confidential environments.
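The serving loop the abstract describes (round-robin over several LLMs on one GPU, with at most one model resident at a time) can be sketched as follows; the swap and inference costs and model names are illustrative assumptions, not the paper's implementation:

```python
from collections import deque

# Hypothetical sketch of a round-robin, swap-constrained serving loop:
# one GPU, several LLMs, only one model resident at a time, and a swap
# cost paid whenever the resident model changes. Costs are made up.

def serve_round_robin(queues: dict[str, deque], swap_s: float, infer_s: float):
    """Drain per-model request queues in round-robin order, paying a swap
    cost each time the resident model changes. Returns (request, finish_time)."""
    clock, resident, done = 0.0, None, []
    while any(queues.values()):
        for model, q in queues.items():
            if not q:
                continue
            if resident != model:
                clock += swap_s   # load (and, in CC mode, decrypt) the weights
                resident = model
            req = q.popleft()
            clock += infer_s      # one relaxed batch, size 1 for simplicity
            done.append((req, clock))
    return done

queues = {"llm-a": deque(["a1", "a2"]), "llm-b": deque(["b1"])}
print(serve_round_robin(queues, swap_s=2.0, infer_s=0.5))
# → [('a1', 2.5), ('b1', 5.0), ('a2', 7.5)]
```

Even in this toy trace, most of the elapsed time is swap cost rather than inference, which is why the swap path (and its CC crypto overhead) dominates the comparison.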
Problem

Research questions and friction points this paper is trying to address.

Evaluating performance of confidential GPU inference with model swapping
Comparing latency and throughput between confidential and non-confidential GPU modes
Analyzing SLA attainment under varying traffic and scheduling conditions
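The SLA-attainment metric the study compares can be made concrete with a minimal sketch; the deadline and latency samples below are invented for illustration, not taken from the paper:

```python
# Minimal sketch of SLA attainment: the fraction of requests whose
# end-to-end latency meets a deadline. Samples and threshold are made up.

def sla_attainment(latencies_s: list[float], slo_s: float) -> float:
    """Fraction of requests finishing within the SLA deadline."""
    if not latencies_s:
        return 1.0
    return sum(l <= slo_s for l in latencies_s) / len(latencies_s)

no_cc = [0.9, 1.1, 1.4, 1.8]  # hypothetical No-CC latencies (s)
cc = [1.2, 1.5, 1.9, 2.4]     # hypothetical CC latencies (s)
print(sla_attainment(no_cc, slo_s=1.5), sla_attainment(cc, slo_s=1.5))
# → 0.75 0.5
```

Because attainment is a hard threshold on latency, even a modest latency shift in CC mode can produce the double-digit attainment gap the summary reports.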
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses confidential GPU for secure model inference
Explores traffic patterns and scheduling strategies
Compares performance between confidential and non-confidential modes
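The synthetic traffic patterns mentioned above could, for example, be generated from Poisson arrivals with phase-dependent rates; this is a hedged sketch with assumed rates and phase lengths, not the paper's generator:

```python
import random

# Hypothetical generator for synthetic traffic: Poisson arrivals whose
# rate varies by phase (steady load followed by a burst). All rates and
# durations are illustrative assumptions.

def poisson_arrivals(rate_per_s: float, duration_s: float, rng: random.Random):
    """Arrival timestamps with exponentially distributed inter-arrival gaps."""
    t, out = 0.0, []
    while True:
        t += rng.expovariate(rate_per_s)
        if t >= duration_s:
            return out
        out.append(t)

rng = random.Random(0)
# Bursty pattern: a low-rate phase, then a high-rate burst shifted in time.
trace = poisson_arrivals(0.5, 60.0, rng) + [60.0 + t for t in poisson_arrivals(5.0, 10.0, rng)]
print(len(trace), "arrivals over 70 s")
```

Varying the per-phase rates and phase lengths yields the kinds of load and distribution-pattern sweeps the evaluation describes.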
👥 Authors
Antonio Martínez Ibarra, University of Murcia
Julian James Stephen, IBM Research
Aurora González Vidal, University of Murcia
K. R. Jayaram, Research Scientist, IBM Research (Distributed Systems, Programming Languages)
Antonio Fernando Skarmeta Gómez, University of Murcia