Structured Codes for Distributed Matrix Multiplication

📅 2024-12-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses secure and efficient matrix multiplication over finite fields for strongly correlated matrices A and B in a distributed master-workers-receiver architecture. Methodologically, it establishes the first tight sum-rate bounds for two-node distributed bilinear function computation; introduces a paradigm that integrates Körner–Marton structured coding with non-linear source transformations, surpassing the compression limits of Slepian–Wolf coding; derives a Han–Kobayashi-type converse; and designs structured polynomial codes that provide information-theoretic security and support chained multiplication. Theoretically, it proves sum-rate optimality (for large field sizes) for arbitrary matrix dimensions and strongly correlated sources. Experimentally, the scheme significantly outperforms state-of-the-art approaches, particularly under memory-constrained worker settings, demonstrating superior communication efficiency and computational scalability.
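The Körner–Marton ingredient mentioned in the summary can be illustrated with a classic toy example: two nodes holding strongly correlated binary sources apply the *same* linear map (here, the parity-check matrix of the (7,4) Hamming code) and send only their syndromes; the receiver recovers the modulo-two sum A ⊕ B from far fewer bits than Slepian–Wolf would need for the sources themselves. This is a minimal sketch of the coding idea, not the paper's actual scheme (which extends it to bilinear functions):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the
# 3-bit binary expansion of j+1, so a single-position difference
# between A and B is located exactly by the syndrome.
H = np.array([[(i >> k) & 1 for i in range(1, 8)] for k in range(3)])

rng = np.random.default_rng(1)
A = rng.integers(0, 2, 7)
B = A.copy()
B[rng.integers(7)] ^= 1          # strongly correlated: differ in <= 1 bit

# Each node applies the SAME linear map and sends its 3-bit syndrome,
# instead of its full 7-bit source (3 + 3 bits vs. 7 + 3 for Slepian-Wolf).
sA = H @ A % 2
sB = H @ B % 2

# Receiver: by linearity, sA xor sB is the syndrome of Z = A xor B.
sZ = sA ^ sB
Z = np.zeros(7, dtype=int)
if sZ.any():
    pos = int(sZ @ np.array([1, 2, 4])) - 1   # which column of H matched
    Z[pos] = 1
assert np.array_equal(Z, A ^ B)
```

The key point is that both encoders use one shared structured (linear) code, which is what lets the syndromes combine into a syndrome of the function value.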

📝 Abstract
Our work addresses the well-known open problem of distributed computing of bilinear functions of two correlated sources $\mathbf{A}$ and $\mathbf{B}$. In a setting with two nodes, the first having access to $\mathbf{A}$ and the second to $\mathbf{B}$, we establish bounds on the optimal sum-rate that allows a receiver to compute an important class of non-linear functions, and in particular bilinear functions, including dot products $\langle \mathbf{A},\mathbf{B}\rangle$ and general matrix products $\mathbf{A}^{\intercal}\mathbf{B}$ over finite fields. The bounds are tight for large field sizes, in which case we derive the exact fundamental performance limits for all problem dimensions and a large class of sources. Our achievability scheme involves the design of non-linear transformations of $\mathbf{A}$ and $\mathbf{B}$, which are carefully calibrated to work synergistically with the structured linear encoding scheme by Körner and Marton. The converse derived here calibrates the Han–Kobayashi approach to yield a relatively tight bound on the sum rate. We also demonstrate unbounded compression gains over Slepian–Wolf coding, depending on the source correlations. In the end, our work derives fundamental limits for distributed computing of a crucial class of functions, succinctly capturing the computation structures and source correlations. Our findings are subsequently applied to the practical master-workers-receiver framework, where each of $N$ distributed workers has a bounded memory reflecting a bounded computational capability. By combining the above scheme with the polynomial code framework, we design novel structured polynomial codes for distributed matrix multiplication, and show that our codes can surpass the performance of the existing state of the art, while also adapting these new codes to support chained matrix multiplications and information-theoretically secure computation.
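The polynomial code framework the abstract combines with its compression scheme can be sketched numerically. Below is a minimal toy version of classical polynomial coding for $\mathbf{A}^{\intercal}\mathbf{B}$: the master encodes column blocks of $\mathbf{A}$ and $\mathbf{B}$ as matrix polynomials, four workers each multiply one coded pair, and the receiver interpolates the four product blocks. The block counts, evaluation points, and real-valued arithmetic are illustrative simplifications of this paper's finite-field construction:

```python
import numpy as np

rng = np.random.default_rng(0)
p, r, s = 4, 6, 6
A = rng.integers(0, 5, (p, r)).astype(float)
B = rng.integers(0, 5, (p, s)).astype(float)

# Split A and B column-wise into two blocks each.
A0, A1 = A[:, :r // 2], A[:, r // 2:]
B0, B1 = B[:, :s // 2], B[:, s // 2:]

# Encoding: worker i gets A0 + A1*x_i and B0 + B1*x_i^2 and multiplies them:
# (A0 + A1 x)^T (B0 + B1 x^2)
#   = A0^T B0 + (A1^T B0) x + (A0^T B1) x^2 + (A1^T B1) x^3,
# a degree-3 matrix polynomial, so 4 worker results suffice.
xs = np.array([1.0, 2.0, 3.0, 4.0])          # distinct evaluation points
worker_out = [(A0 + A1 * x).T @ (B0 + B1 * x**2) for x in xs]

# Decoding: interpolate the 4 coefficient blocks from the 4 evaluations.
V = np.vander(xs, 4, increasing=True)        # Vandermonde system V @ coeffs
stacked = np.stack(worker_out)               # shape (4, r/2, s/2)
coeffs = np.linalg.solve(V, stacked.reshape(4, -1)).reshape(4, r // 2, s // 2)

# Coefficients in degree order: A0^T B0, A1^T B0, A0^T B1, A1^T B1.
C = np.block([[coeffs[0], coeffs[2]], [coeffs[1], coeffs[3]]])
assert np.allclose(C, A.T @ B)
```

Each worker touches only one coded block pair (the bounded-memory constraint in the abstract), and any 4 of $N > 4$ workers would suffice; the paper's contribution is to restructure such codes around source correlation and security, which this sketch does not capture.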
Problem

Research questions and friction points this paper is trying to address.

Distributed Computing
Matrix Multiplication
Efficiency Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed Computing
Matrix Multiplication
Polynomial Coding