🤖 AI Summary
This work addresses the limitations of the classical Bradley–Terry model, which assumes transitive preferences and thus fails to capture cyclic intransitivities prevalent in real-world competitive networks, leading to biased estimates and miscalibrated uncertainty quantification. To overcome this, we propose a Bayesian intransitive Bradley–Terry model that, for the first time, integrates combinatorial Hodge decomposition into the pairwise comparison framework. This approach explicitly decomposes pairwise preferences into a transitive gradient flow and an intransitive curl flow, with adaptive regularization achieved through a global–local shrinkage prior. The model naturally reduces to the classical Bradley–Terry formulation in the absence of intransitivity and enables uncertainty quantification at the triplet level. Experiments demonstrate that our method significantly outperforms existing Bayesian intransitive approaches in estimation accuracy, uncertainty calibration, and computational efficiency, while effectively identifying cyclic competitive advantages.
📝 Abstract
Pairwise comparison data are widely used to infer latent rankings in areas such as sports, social choice, and machine learning. The Bradley–Terry model provides a foundational probabilistic framework but inherently assumes transitive preferences, explaining all comparisons solely through subject-specific parameters. In many competitive networks, however, cycle-induced effects are intrinsic, and ignoring them can distort both estimation and uncertainty quantification. To address this limitation, we propose a Bayesian extension of the Bradley–Terry model that explicitly separates the transitive and intransitive components. The proposed Bayesian Intransitive Bradley–Terry model embeds combinatorial Hodge theory into a logistic framework, decomposing pairwise relationships into a gradient flow representing transitive strength and a curl flow capturing cycle-induced structure. We impose global–local shrinkage priors on the curl component, enabling data-adaptive regularization and ensuring a natural reduction to the classical Bradley–Terry model when intransitivity is absent. Posterior inference is performed using an efficient Gibbs sampler, providing scalable computation and full Bayesian uncertainty quantification. Simulation studies demonstrate improved estimation accuracy, well-calibrated uncertainty, and substantial computational advantages over existing Bayesian models for intransitivity. The proposed framework enables uncertainty-aware quantification of intransitivity at both the global and triad levels, while also characterizing cycle-induced competitive advantages among teams.
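The gradient/curl split at the heart of this framework can be illustrated with a minimal sketch (not the authors' code). On a complete comparison graph, the least-squares (HodgeRank-style) projection of a skew-symmetric flow matrix onto gradient flows reduces to averaging net inflows; the residual is the cyclic part. The matrix `Y` below is a hypothetical rock-paper-scissors example, where the flow is purely cyclic:

```python
import numpy as np

# Hypothetical skew-symmetric flow matrix for 3 items A, B, C:
# Y[i, j] > 0 means j is preferred over i. Here A > B, B > C, C > A,
# i.e. a pure preference cycle with no transitive signal.
Y = np.array([[0.0,  1.0, -1.0],
              [-1.0, 0.0,  1.0],
              [1.0, -1.0,  0.0]])

n = Y.shape[0]
# On a complete graph, the least-squares "strength" score of item i
# is its average net inflow (column sum of Y divided by n).
s = Y.sum(axis=0) / n

# Transitive gradient flow induced by the scores: grad[i, j] = s_j - s_i.
grad = s[None, :] - s[:, None]

# Residual cyclic (curl) flow: what the transitive model cannot explain.
curl = Y - grad

# For this cycle, s is all zeros and the flow is entirely curl (curl == Y).
```

The decomposition is exact by construction (`grad + curl == Y`), so a transitive dataset yields a vanishing curl component, mirroring the model's reduction to classical Bradley–Terry when intransitivity is absent.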