Generalized Pinsker Inequality for Bregman Divergences of Negative Tsallis Entropies

📅 2026-02-05
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work investigates the relationship between the Bregman divergences induced by the negative $\alpha$-Tsallis entropies and the total variation ($L^1$) distance, with applications to probabilistic prediction and to error analysis in online learning. Using tools from convex analysis and information geometry, the authors establish a sharp Pinsker-type inequality that lower bounds the $\alpha$-Tsallis Bregman divergence by a quadratic function of the total variation distance. Specifically, they prove that for any probability distributions $p$ and $q$, $D_\alpha(p\|q) \geq \frac{C_{\alpha,K}}{2} \|p - q\|_1^2$, and they derive the optimal constant $C_{\alpha,K}$ explicitly in terms of the entropy parameter $\alpha$ and the dimension $K$. This result refines the quantitative comparison between Bregman and total variation discrepancies and closes a gap in the theory of Tsallis-entropy-based Bregman divergences.
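For concreteness, here is the standard construction behind this statement (a sketch under the usual normalization of the Tsallis entropy; the paper's convention may differ by a constant factor). Take as generator the negative $\alpha$-Tsallis entropy $\phi_\alpha(p) = \frac{1}{\alpha-1}\big(\sum_{i=1}^{K} p_i^\alpha - 1\big)$ for $\alpha \neq 1$; the induced Bregman divergence is \[ D_\alpha(p\Vert q) = \phi_\alpha(p) - \phi_\alpha(q) - \langle \nabla \phi_\alpha(q),\, p - q \rangle = \frac{1}{\alpha-1} \sum_{i=1}^{K} \Big( p_i^\alpha - \alpha\, q_i^{\alpha-1} p_i + (\alpha-1)\, q_i^\alpha \Big). \] Two sanity checks: as $\alpha \to 1$, $\phi_\alpha$ tends to the negative Shannon entropy, $D_\alpha$ to the Kullback–Leibler divergence, and the bound to the classical Pinsker inequality; at $\alpha = 2$ the sum collapses to $D_2(p\Vert q) = \|p - q\|_2^2$.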

📝 Abstract
The Pinsker inequality lower bounds the Kullback–Leibler divergence $D_{\textrm{KL}}$ in terms of total variation and provides a canonical way to convert $D_{\textrm{KL}}$ control into $\lVert \cdot \rVert_1$-control. Motivated by applications to probabilistic prediction with Tsallis losses and online learning, we establish a generalized Pinsker inequality for the Bregman divergences $D_\alpha$ generated by the negative $\alpha$-Tsallis entropies, also known as $\beta$-divergences. Specifically, for any $p$, $q$ in the relative interior of the probability simplex $\Delta^K$, we prove the sharp bound \[ D_\alpha(p\Vert q) \ge \frac{C_{\alpha,K}}{2}\cdot \|p-q\|_1^2, \] and we determine the optimal constant $C_{\alpha,K}$ explicitly for every choice of $(\alpha,K)$.
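As a numerical companion to the abstract, the following minimal sketch (not the authors' code; the generator normalization above and the Monte Carlo setup are assumptions) evaluates $D_\alpha$ directly from the Bregman definition and probes $2\,D_\alpha(p\Vert q)/\|p-q\|_1^2$ over random pairs in the interior of the simplex; the infimum of this ratio is the optimal constant $C_{\alpha,K}$, so the empirical minimum is an upper estimate of the closed-form constant the paper determines.

```python
import numpy as np

def tsallis_bregman(p, q, alpha):
    """Bregman divergence D_alpha(p||q) generated by the negative
    alpha-Tsallis entropy phi(p) = (sum_i p_i^alpha - 1) / (alpha - 1).
    Assumes alpha != 1 (the limit alpha -> 1 recovers KL divergence)."""
    return np.sum(p**alpha - alpha * q**(alpha - 1) * p
                  + (alpha - 1) * q**alpha) / (alpha - 1)

rng = np.random.default_rng(0)
alpha, K, trials = 1.5, 4, 100_000

# Monte Carlo probe of inf 2*D_alpha / ||p - q||_1^2 over the relative
# interior of Delta^K; the true infimum is the optimal Pinsker constant
# C_{alpha,K}, so the sampled minimum can only overestimate it.
best = np.inf
for _ in range(trials):
    p = rng.dirichlet(np.ones(K))  # interior of the simplex a.s.
    q = rng.dirichlet(np.ones(K))
    tv2 = np.sum(np.abs(p - q)) ** 2
    if tv2 > 1e-12:  # skip near-identical pairs to avoid 0/0
        best = min(best, 2.0 * tsallis_bregman(p, q, alpha) / tv2)

print(f"alpha={alpha}, K={K}: empirical min of "
      f"2*D_alpha/||p-q||_1^2 = {best:.4f}")
```

For $\alpha = 2$ the divergence equals $\|p-q\|_2^2$, so Cauchy–Schwarz already forces the ratio to be at least $2/K$; the paper's contribution is the sharp constant for every choice of $(\alpha, K)$.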
Problem

Research questions and friction points this paper is trying to address.

Pinsker inequality
Bregman divergence
Tsallis entropy
total variation
information geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalized Pinsker inequality
Bregman divergence
Tsallis entropy
β-divergence
total variation
Guglielmo Beretta
DAIS, Università Ca’ Foscari Venezia, Italy; DAUIN, Politecnico di Torino, Italy
Tommaso Cesari
School of Electrical Engineering and Computer Science, University of Ottawa, Canada
Roberto Colomboni
Machine Learning Researcher at Politecnico di Milano (POLIMI) and Università degli Studi di Milano (UNIMI), Italy
Statistical learning theory
Online learning
Multi-Armed Bandits