Secure Multi-User Linearly-Separable Distributed Computing

📅 2026-02-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the privacy-preserving challenge in multi-user linearly separable distributed computing by proposing a secure coding framework based on sparse matrix factorization \( F = DE \). By carefully designing the encoding matrix \( D \) and task assignment matrix \( E \) to satisfy specific rank and subspace conditions, and incorporating shared randomness, the authors establish necessary and sufficient conditions for information-theoretic security. The scheme achieves strict information isolation among users without incurring additional computational or communication overhead: it guarantees perfect secrecy over finite fields and ensures that mutual information over the real field can be made arbitrarily close to zero by increasing the variance of the shared randomness, all while preserving near-optimal parallelization gain.
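The summary above notes that over the real field, decoding stays exact while secrecy improves as the variance of the shared randomness grows. A minimal numpy sketch of that null-space masking idea (the matrices here are toy stand-ins, not the paper's actual construction, which additionally imposes sparsity and rank/subspace conditions):

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, L = 2, 3, 4                     # users, servers, function dimension
D = rng.standard_normal((K, N))       # K x N decoding matrix, full rank w.h.p.

# Orthonormal basis of Null(D) via SVD: the right-singular vectors
# beyond the rank of D span its null space (dimension N - K = 1 here).
_, _, Vt = np.linalg.svd(D)
Z = Vt[K:].T                          # N x (N - K), satisfies D @ Z ~ 0

E = rng.standard_normal((N, L))       # N x L task-assignment matrix
sigma = 100.0                         # larger variance -> stronger masking
E_sec = E + Z @ (sigma * rng.standard_normal((N - K, L)))

assert np.allclose(D @ E_sec, D @ E)  # users still recover F = D @ E
```

Because the injected Gaussian noise lives entirely in Null(D), it cancels under decoding, while each individual server's (masked) row of E_sec becomes progressively less informative about E as sigma grows.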

πŸ“ Abstract
The recent introduction of the multi-user linearly-separable distributed computing framework has revealed how a parallel treatment of users can yield large parallelization gains with relatively low computation and communication costs. These gains stem from a new approach that converts the computing problem into a sparse matrix factorization problem: a matrix $F$ that describes the users' requests is decomposed as \(F = DE\), where a \(\gamma\)-sparse \(E\) defines the task allocation across $N$ servers, and a \(\delta\)-sparse \(D\) defines the connectivity between the \(N\) servers and \(K\) users as well as the decoding process. While this approach provides near-optimal performance, its linear nature has raised data-secrecy concerns. Here we adopt an information-theoretic secrecy framework, seeking guarantees that each user can learn nothing more than its own requested function. In this context, our main result provides two necessary and sufficient secrecy criteria: (i) for each user \(k\) who observes $\alpha_k$ server responses, the common randomness visible to that user must span a subspace of dimension exactly $\alpha_k-1$, and (ii) for each user, removing from \(\mathbf{D}\) the columns corresponding to the servers it observes must leave a matrix of rank at least \(K-1\). With these conditions in place, we design a general scheme -- applicable to finite and non-finite fields alike -- based on appending to \(\mathbf{E}\) a basis of \(\mathrm{Null}(\mathbf{D})\) and on carefully injecting shared randomness. In many cases, this entails no additional costs. The scheme, while maintaining performance, guarantees perfect information-theoretic secrecy over finite fields, while in the real case the conditions yield an explicit mutual-information bound that can be made arbitrarily small by increasing the variance of the Gaussian common randomness.
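Over a finite field, the null-space construction sketched in the abstract can be illustrated in a few lines: any vector \(z\) with \(Dz = 0 \pmod p\), scaled by shared randomness, can be added to \(E\) without changing \(F = DE\). A toy GF(p) example (all matrices illustrative; the paper's sparsity and rank conditions are not modeled here):

```python
import numpy as np

p = 11                                # small prime field GF(p)
rng = np.random.default_rng(0)

K, N, L = 2, 3, 4                     # users, servers, function dimension
D = np.array([[1, 1, 0],              # K x N decoding/connectivity matrix
              [0, 1, 1]])
E = rng.integers(0, p, size=(N, L))   # N x L task-assignment matrix
F = (D @ E) % p                       # the users' requested functions

# A basis vector of Null(D) over GF(p): D @ z = 0 (mod p).
z = np.array([[1], [p - 1], [1]])
assert ((D @ z) % p == 0).all()

# Shared randomness r masks every server's task row without changing F.
r = rng.integers(0, p, size=(1, L))
E_sec = (E + z @ r) % p               # inject null-space randomness into E

assert np.array_equal((D @ E_sec) % p, F)   # decoding is unaffected
```

Since the added term \(zr\) lies in the null space of \(D\), it vanishes at decoding, which is the mechanism behind the perfect-secrecy guarantee in the finite-field case.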
Problem

Research questions and friction points this paper is trying to address.

secure distributed computing
multi-user privacy
information-theoretic secrecy
linearly-separable computation
data confidentiality
Innovation

Methods, ideas, or system contributions that make the work stand out.

secure distributed computing
linearly-separable computation
information-theoretic secrecy
sparse matrix factorization
common randomness
Amir Masoud Jafarpisheh
School of Engineering, University of Edinburgh, Edinburgh, UK
Ali Khalesi
Institut Polytechnique des Sciences AvancΓ©es (IPSA), Paris, France
Petros Elia
Professor, Communication Systems Department, Eurecom
Caching · Wireless Communications · Information Theory