🤖 AI Summary
This paper addresses the two-sample variable selection problem—identifying variables that significantly distinguish between two data distributions to uncover underlying mechanistic differences. To overcome the lack of generality and theoretical guarantees in existing methods, we formally define the “discriminative variable set” and prove its uniqueness for the first time. We propose an interpretable, sparse-weight learning framework grounded in the Maximum Mean Discrepancy (MMD), integrating kernel two-sample testing with a data-driven regularization parameter selection strategy to jointly optimize recall and precision. On synthetic benchmarks, our method substantially outperforms mainstream baselines. Furthermore, it demonstrates practical efficacy and interpretability in real-world applications: precisely identifying physically meaningful discriminative variables in water pipe and transportation network datasets. These results validate both the statistical soundness and domain applicability of our approach.
📝 Abstract
We study two-sample variable selection: identifying variables that discriminate between the distributions of two sets of data vectors. Such variables help scientists understand the mechanisms behind dataset discrepancies. Although domain-specific methods exist (e.g., in medical imaging, genetics, and computational social science), a general framework remains underdeveloped. We make two separate contributions. (i) We introduce a mathematical notion of the discriminating set of variables: the largest subset containing no variables whose marginals are identical across the two distributions and independent of the remaining variables. We prove this set is uniquely defined and establish further properties, making it a suitable ground truth for theory and evaluation. (ii) We propose two methods for two-sample variable selection that assign weights to variables and optimise them to maximise the power of a kernel two-sample test while enforcing sparsity to downweight redundant variables. To select the regularisation parameter - unknown in practice, as it controls the number of selected variables - we develop two data-driven procedures to balance recall and precision. Synthetic experiments show improved performance over baselines, and we illustrate the approach on two applications using datasets from water-pipe and traffic networks.