🤖 AI Summary
This paper addresses a central limitation of the classical entropy power inequality (EPI), which applies only to independent random variables, by establishing a quantitative EPI for dependent random vectors. Methodologically, it combines information-theoretic analysis, the link between Fisher information and entropy, the theory of log-supermodular functions, coupling techniques, and probabilistic inequalities. The key contributions are twofold: (1) a quantitative EPI with explicit error bounds and constants that dispenses with the independence assumption; and (2) a rigorous proof that a conditional-entropy-based EPI holds whenever the joint density is log-supermodular, i.e., exhibits a positive dependence structure. This extends the EPI's applicability to a broad class of strongly dependent settings, with relevance to high-dimensional statistical inference and distributed source coding.
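For concreteness, the objects involved can be sketched as follows. The classical EPI and the log-supermodularity condition are standard; the conditional-entropy form at the end is a natural reading of the "conditional-entropy-based EPI" mentioned above, not the paper's verbatim statement, and its exact error terms and constants may differ.

```latex
% Classical EPI (Shannon--Stam): for independent X, Y on R^n with densities,
% with entropy power N(X) = exp(2 h(X)/n) / (2 pi e):
N(X + Y) \ge N(X) + N(Y), \qquad N(X) := \frac{1}{2\pi e}\, e^{2 h(X)/n}.

% Log-supermodularity of a joint density f, with coordinatewise
% maximum (\vee) and minimum (\wedge):
f(u \vee v)\, f(u \wedge v) \ge f(u)\, f(v) \quad \text{for all } u, v.

% Schematic conditional-entropy EPI (assumed form):
e^{2 h(X+Y)/n} \ge e^{2 h(X \mid Y)/n} + e^{2 h(Y \mid X)/n}.
```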
📝 Abstract
The entropy power inequality for independent random vectors is a foundational result of information theory, with deep connections to probability and geometric functional analysis. Several extensions of the entropy power inequality have been developed for settings with dependence, including work by Takano, Johnson, and Rioul. We extend these works by developing a quantitative version of the entropy power inequality for dependent random vectors. A notable consequence is that an entropy power inequality stated using conditional entropies holds for random vectors whose joint density is log-supermodular.
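As a numerical sanity check under the schematic form above, the sketch below evaluates the conditional-entropy EPI in closed form for a bivariate Gaussian; a bivariate Gaussian density with correlation ρ ≥ 0 is a standard example of a log-supermodular density. The function names (`h_gauss`, `conditional_epi_gap`) are illustrative, not from the paper.

```python
import numpy as np

def h_gauss(var: float) -> float:
    """Differential entropy of a 1-D Gaussian with the given variance."""
    return 0.5 * np.log(2 * np.pi * np.e * var)

def conditional_epi_gap(sx2: float, sy2: float, rho: float) -> float:
    """Gap exp(2h(X+Y)) - exp(2h(X|Y)) - exp(2h(Y|X)) for jointly Gaussian
    (X, Y) with variances sx2, sy2 and correlation rho. A nonnegative gap
    means the (assumed) conditional-entropy EPI holds for this pair."""
    var_sum = sx2 + sy2 + 2 * rho * np.sqrt(sx2 * sy2)   # Var(X + Y)
    h_x_given_y = h_gauss(sx2 * (1 - rho**2))            # Gaussian conditioning
    h_y_given_x = h_gauss(sy2 * (1 - rho**2))
    return (np.exp(2 * h_gauss(var_sum))
            - np.exp(2 * h_x_given_y)
            - np.exp(2 * h_y_given_x))

# For rho >= 0 the bivariate Gaussian density is log-supermodular and the
# gap stays nonnegative; for rho < 0 it can go negative.
for rho in [-0.5, 0.0, 0.3, 0.9]:
    print(f"rho={rho:+.1f}  gap={conditional_epi_gap(1.0, 1.5, rho):+.4f}")
```

At ρ = 0 the gap vanishes exactly, recovering the Gaussian equality case of the classical EPI, while positive correlation makes the inequality strict in this example.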