🤖 AI Summary
This work constructs generalized information measures that satisfy subadditivity. It introduces a class of $(G,f)$-divergences, obtained by applying a non-decreasing function $G$ to an $f$-divergence, along with their associated information measures. By establishing a simplification principle on binary alphabets, the authors develop a $(G,f)$-information framework that reduces the verification of subadditivity under product distributions and channels to the binary case. This approach yields easily verifiable sufficient conditions for subadditivity for a range of classical $f$-divergences and enables novel applications, including finite-blocklength converse bounds for channel coding, performance bounds for binary hypothesis testing, and a generalization of the Shannon–Gallager–Berlekamp sphere-packing bound.
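To fix ideas, the sketch below spells out the shape of the construction as described above; the exact conventions (the normalization of $D_f$, the domain of $G$, and the precise definition of the information measure) are assumptions here and follow the paper, not this summary.

```latex
% Hedged sketch of the objects described above (conventions assumed).
% D_f is the standard f-divergence, with f convex and f(1) = 0:
\[
  D_f(P\|Q) \;=\; \sum_{x} Q(x)\, f\!\left(\frac{P(x)}{Q(x)}\right).
\]
% The (G,f)-divergence composes a non-decreasing G with D_f:
\[
  D_{G,f}(P\|Q) \;=\; G\bigl(D_f(P\|Q)\bigr).
\]
% In the spirit of Csiszar's mutual f-information, one natural reading of
% the associated information measure compares the joint law to the product
% of the marginals (the paper's definition may differ in details):
\[
  I_{G,f}(X;Y) \;=\; D_{G,f}\bigl(P_{XY}\,\big\|\,P_X \times P_Y\bigr).
\]
% Subadditivity over product distributions then reads:
\[
  D_{G,f}(P_1 \times P_2 \,\|\, Q_1 \times Q_2)
  \;\le\; D_{G,f}(P_1\|Q_1) + D_{G,f}(P_2\|Q_2).
\]
```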
📝 Abstract
We introduce a two-parameter family of discrepancy measures, termed \emph{$(G,f)$-divergences}, obtained by applying a non-decreasing function $G$ to an $f$-divergence $D_f$. Building on Csisz\'ar's formulation of mutual $f$-information, we define a corresponding $(G,f)$-information measure $I_{G,f}(X;Y)$. A central theme of the paper is subadditivity over product distributions and product channels. We develop reduction principles showing that, for broad classes of $G$, it suffices to verify divergence subadditivity on binary alphabets. Specializing to the functions $G(x)\in\{x,\log(1+x),-\log(1-x)\}$, we derive tractable sufficient conditions on $f$ that guarantee subadditivity, covering many standard $f$-divergences. Finally, we present applications to finite-blocklength converses for channel coding, bounds in binary hypothesis testing, and an extension of the Shannon--Gallager--Berlekamp sphere-packing exponent framework to subadditive $(G,f)$-divergences.
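As a concrete illustration, here is a minimal numerical sketch (not code from the paper; the KL generator $f(t)=t\log t$, the function names, and the example distributions are illustrative assumptions) checking subadditivity of the $G(x)=\log(1+x)$ variant over a product of binary distributions:

```python
import numpy as np

def f_div(p, q, f):
    """D_f(P||Q) = sum_x q(x) * f(p(x)/q(x)); assumes q > 0 everywhere."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

def f_kl(t):
    """f(t) = t*log(t), the generator of KL divergence, with 0*log(0) := 0."""
    t = np.asarray(t, float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = t[pos] * np.log(t[pos])
    return out

def G_log1p(x):
    """G(x) = log(1 + x), one of the three G's named in the abstract."""
    return np.log1p(x)

def product(p1, p2):
    """Product distribution P1 x P2, flattened to a single vector."""
    return np.outer(p1, p2).ravel()

# Two pairs of binary distributions (arbitrary illustrative numbers).
p1, q1 = np.array([0.8, 0.2]), np.array([0.5, 0.5])
p2, q2 = np.array([0.6, 0.4]), np.array([0.3, 0.7])

# Subadditivity over product distributions:
#   G(D_f(P1 x P2 || Q1 x Q2)) <= G(D_f(P1||Q1)) + G(D_f(P2||Q2)).
lhs = G_log1p(f_div(product(p1, p2), product(q1, q2), f_kl))
rhs = G_log1p(f_div(p1, q1, f_kl)) + G_log1p(f_div(p2, q2, f_kl))
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}: {lhs <= rhs + 1e-12}")
```

For the KL choice the check is transparent: $D_f$ is additive over products, so subadditivity of this variant reduces to $\log(1+a+b)\le\log(1+a)+\log(1+b)$, which holds because $(1+a)(1+b)\ge 1+a+b$ for $a,b\ge 0$. For general $f$-divergences the product divergence must be computed on the product alphabet, as the sketch does.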