Variable Selection in Maximum Mean Discrepancy for Interpretable Distribution Comparison

📅 2023-11-02
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the two-sample variable selection problem—identifying variables that significantly distinguish between two data distributions to uncover underlying mechanistic differences. To overcome the lack of generality and theoretical guarantees in existing methods, we formally define the “discriminative variable set” and prove its uniqueness for the first time. We propose an interpretable, sparse-weight learning framework grounded in the Maximum Mean Discrepancy (MMD), integrating kernel two-sample testing with a data-driven regularization parameter selection strategy to jointly optimize recall and precision. On synthetic benchmarks, our method substantially outperforms mainstream baselines. Furthermore, it demonstrates practical efficacy and interpretability in real-world applications: precisely identifying physically meaningful discriminative variables in water pipe and transportation network datasets. These results validate both the statistical soundness and domain applicability of our approach.
📝 Abstract
We study two-sample variable selection: identifying variables that discriminate between the distributions of two sets of data vectors. Such variables help scientists understand the mechanisms behind dataset discrepancies. Although domain-specific methods exist (e.g., in medical imaging, genetics, and computational social science), a general framework remains underdeveloped. We make two separate contributions. (i) We introduce a mathematical notion of the discriminating set of variables: the largest subset containing no variables whose marginals are identical across the two distributions and independent of the remaining variables. We prove this set is uniquely defined and establish further properties, making it a suitable ground truth for theory and evaluation. (ii) We propose two methods for two-sample variable selection that assign weights to variables and optimise them to maximise the power of a kernel two-sample test while enforcing sparsity to downweight redundant variables. To select the regularisation parameter - unknown in practice, as it controls the number of selected variables - we develop two data-driven procedures to balance recall and precision. Synthetic experiments show improved performance over baselines, and we illustrate the approach on two applications using datasets from water-pipe and traffic networks.
Problem

Research questions and friction points this paper is trying to address.

Identifying variables that discriminate between two data distributions
Developing a general framework for interpretable two-sample variable selection
Proposing sparse variable weighting methods to maximize test power
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defines largest subset with unique marginal distributions
Assigns weights to maximize kernel test power
Develops data-driven procedures for parameter selection
🔎 Similar Papers
No similar papers found.
Kensuke Mitsuzawa
Kensuke Mitsuzawa
PostDoc at Laboratoire Jean Alexandre Dieudonné, Université Côte d'azur
Maximum Mean DiscrepancyKernel LearningApproximated Bayesian Optimization
Motonobu Kanagawa
Motonobu Kanagawa
EURECOM
statisticsmachine learningapplied mathematicssimulationprobabilistic numerics
S
Stefano Bortoli
Microblink, Trg Drage Iblera 10, 10000, Zagreb, Croatia. (Work done while affiliated with Huawei Munich Research Center.)
M
Margherita Grossi
Intelligent Cloud Technologies Laboratory, Huawei Munich Research Center, Riesstraße 25, 80992 München, Germany.
Paolo Papotti
Paolo Papotti
Professor at EURECOM
Data ManagementInformation QualityLLMs