🤖 AI Summary
This paper investigates the computing capacity, defined as the infimum of the maximum expected number of bits transmitted on an edge per computation, for zero-error computation of a target function multiple times at a single sink node in a directed acyclic graph (DAG). The authors derive a general lower bound on the computing capacity in terms of clique entropy, which also reveals the graph-theoretic nature of the key parameter in the best known bound for fixed-length coding. By refining the probability distribution of the information sources, they strictly improve this lower bound. Further, leveraging a substitution lemma for clique entropy, the induced characteristic graphs, and a comparison between uniquely decodable and fixed-length codes, they show that uniquely decodable codes outperform fixed-length codes in terms of the computing capacity. These results yield tighter information-theoretic bounds and provide a rigorous theoretical foundation for code design in network function computation.
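The characteristic graphs mentioned above follow a standard construction from zero-error information theory: two values of a source are joined by an edge exactly when they must be distinguished, i.e., when some assignment of the remaining arguments makes the target function differ. The following minimal sketch builds such a graph; the target function f(x, y) = (x + y) mod 3 and the alphabets are assumptions chosen for illustration, not taken from the paper.

```python
from itertools import combinations

def characteristic_graph(f, xs, ys):
    """Build the edge set of the characteristic graph of x with respect to f:
    x1 and x2 are adjacent iff some y makes f(x1, y) != f(x2, y),
    i.e. iff the two source values must be distinguished at the sink."""
    edges = set()
    for x1, x2 in combinations(xs, 2):
        if any(f(x1, y) != f(x2, y) for y in ys):
            edges.add((x1, x2))
    return edges

# Illustrative target function (an assumption, not from the paper):
# f(x, y) = (x + y) mod 3 with x in {0,...,5} and y in {0, 1, 2}.
f = lambda x, y: (x + y) % 3
G = characteristic_graph(f, range(6), range(3))
# Here x1 and x2 are adjacent exactly when x1 and x2 differ mod 3;
# symbols with equal residue can safely share a codeword.
```

Non-adjacent symbols produce the same function value for every y, which is why codes for the sink can be built on this graph (e.g., via cliques of its complement) rather than on the raw source alphabet.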
📝 Abstract
We consider uniquely decodable coding for zero-error network function computation, in which the single sink node of a directed acyclic graph is required to compute a target function multiple times with zero error, where the arguments of the function are the information sources generated at a set of source nodes. We are interested in the computing capacity from the information-theoretic point of view, defined as the infimum of the maximum expected number of bits transmitted on an edge for computing the target function once on average. We first prove some new results on clique entropy, in particular a substitution lemma for the clique entropy of probabilistic graphs satisfying a certain condition. Using these results, we prove a lower bound on the computing capacity in terms of the clique entropies of the induced characteristic graphs; this lower bound applies to arbitrary network topologies, arbitrary information sources, and arbitrary target functions. By refining the probability distribution of the information sources, we further strictly improve this lower bound. In addition, we compare uniquely decodable and fixed-length network function-computing codes, and show that the former indeed outperforms the latter in terms of the computing capacity. Along the way, we provide a novel graph-theoretic explanation of the key parameter in the best known bound on the computing capacity for fixed-length network function-computing codes, which may help improve the existing results.
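The advantage of uniquely decodable over fixed-length codes claimed above can be seen already in a toy point-to-point setting (a deliberate simplification of the paper's network model; the function, alphabets, and distribution below are assumptions for illustration). A proper coloring of the characteristic graph is a valid zero-error code on the colors; a fixed-length code then costs ceil(log2 of the number of colors) bits, while a Huffman code on the color distribution, which is uniquely decodable, can cost strictly less on average:

```python
import heapq
from itertools import combinations
from math import ceil, log2

# Assumed toy setting: the sink must compute f(x, y) = (x + y) mod 3
# with zero error, where x in {0,...,5} has a skewed distribution.
p_x = {0: 0.4, 1: 0.1, 2: 0.05, 3: 0.3, 4: 0.1, 5: 0.05}
f = lambda x, y: (x + y) % 3

# Characteristic graph: x1, x2 adjacent iff some y distinguishes them.
adj = {x: set() for x in p_x}
for x1, x2 in combinations(p_x, 2):
    if any(f(x1, y) != f(x2, y) for y in range(3)):
        adj[x1].add(x2)
        adj[x2].add(x1)

# Greedy proper coloring: symbols sharing a color are never confusable,
# so transmitting only the color suffices for zero-error computation.
color = {}
for x in p_x:
    taken = {color[n] for n in adj[x] if n in color}
    color[x] = next(c for c in range(len(p_x)) if c not in taken)

n_colors = len(set(color.values()))
fixed_bits = ceil(log2(n_colors))  # fixed-length code on the colors

# Expected length of a Huffman code on the colors: it equals the
# sum of the merged weights produced during the Huffman merges.
p_color = {}
for x, c in color.items():
    p_color[c] = p_color.get(c, 0.0) + p_x[x]
heap = sorted(p_color.values())
heapq.heapify(heap)
expected_bits = 0.0
while len(heap) > 1:
    a, b = heapq.heappop(heap), heapq.heappop(heap)
    expected_bits += a + b
    heapq.heappush(heap, a + b)

print(fixed_bits, expected_bits)  # the variable-length code is cheaper here
```

In this instance the coloring uses three colors, so the fixed-length code needs 2 bits per computation, while the Huffman code averages 1.3 bits; the paper establishes this kind of separation for the network computing capacity itself, via clique entropy, rather than for a single link.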