🤖 AI Summary
Homomorphic encryption (HE) suffers from low computational efficiency for linear algebra operations, hindering its deployment in privacy-preserving AI applications.
Method: This paper proposes the first systematic framework for reducing CKKS ciphertext-based linear algebra computations—including matrix-vector and matrix-matrix multiplication—to equivalent floating-point BLAS operations. The framework leverages SIMD encoding, ciphertext-plaintext co-scheduling, and deep integration with optimized BLAS libraries (e.g., OpenBLAS/MKL).
Contribution/Results: It introduces a security-proven reduction mechanism that preserves CKKS semantic security while drastically narrowing the performance gap between homomorphic and plaintext computation. Experiments show that encrypted square matrix multiplication is only 4–12× slower than double-precision floating-point BLAS, outperforming prior HE-based linear algebra implementations by orders of magnitude. This work delivers a practical, secure, and efficient foundational computing substrate for privacy-aware AI.
📝 Abstract
Homomorphic encryption is a cryptographic paradigm allowing to compute on encrypted data, opening a wide range of applications in privacy-preserving data manipulation, notably in AI. Many of those applications require significant linear algebra computations (matrix x vector products, and matrix x matrix products). This central role of linear algebra computations goes far beyond homomorphic algebra and applies to most areas of scientific computing. This high versatility led, over time, to the development of a set of highly optimized routines, specified in 1979 under the name BLAS (basic linear algebra subroutines). Motivated both by the applicative importance of homomorphic linear algebra and the access to highly efficient implementations of cleartext linear algebra able to draw the most out of available hardware, we explore the connections between CKKS-based homomorphic linear algebra and floating-point plaintext linear algebra. The CKKS homomorphic encryption system is the most natural choice in this setting, as it natively handles real numbers and offers a large SIMD parallelism. We provide reductions for matrix-vector products, vector-vector products for moderate-sized to large matrices to their plaintext equivalents. Combined with BLAS, we demonstrate that the efficiency loss between CKKS-based encrypted square matrix multiplication and double-precision floating-point square matrix multiplication is a mere 4-12 factor, depending on the precise situation.