🤖 AI Summary
To address out-of-distribution (OOD) generalization in regression, this paper introduces random forest variants grounded in the Maximum Risk Minimization (MaxRM) principle, which minimizes the worst-case risk across training environments. The method supports three risk types within a unified framework: the mean squared error (MSE), the negative reward (related to the explained variance), and the regret (the excess risk relative to the best predictor). For the regret risk, the paper proves a novel out-of-sample guarantee over unseen test distributions. The primary method is shown to be statistically consistent, and the accompanying algorithms are computationally efficient. The methods are evaluated on both simulated and real-world datasets, with particular attention to robustness under distributional shift.
📝 Abstract
We consider a regression setting where observations are collected in different environments modeled by different data distributions. The field of out-of-distribution (OOD) generalization aims to design methods that generalize better to test environments whose distributions differ from those observed during training. One line of work proposes minimizing the maximum risk across environments, a principle that we refer to as MaxRM (Maximum Risk Minimization). In this work, we introduce variants of random forests based on the MaxRM principle. We provide computationally efficient algorithms and prove statistical consistency for our primary method. The proposed method can be used with each of the following three risks: the mean squared error, the negative reward (which relates to the explained variance), and the regret (which quantifies the excess risk relative to the best predictor). For MaxRM with regret as the risk, we prove a novel out-of-sample guarantee over unseen test distributions. Finally, we evaluate the proposed methods on both simulated and real-world data.
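The MaxRM principle and the three risk variants can be illustrated with a minimal sketch. This is not the paper's algorithm (which builds random forests); it only shows, for a hypothetical finite set of candidate predictors and toy two-environment data, how the worst-case MSE, negative reward, and regret objectives are formed and minimized. All data and candidate slopes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy environments (X, y) sharing slope 2.0 but with different noise levels.
envs = []
for noise in (0.5, 2.0):
    X = rng.normal(size=200)
    y = 2.0 * X + rng.normal(scale=noise, size=200)
    envs.append((X, y))

# Hypothetical candidate predictors: linear maps x -> b * x.
slopes = [1.0, 1.5, 2.0, 2.5]
candidates = [lambda x, b=b: b * x for b in slopes]

def env_risks(pred, envs, kind="mse"):
    """Per-environment risk of a predictor.
    kind='mse' is the squared error; kind='neg_reward' subtracts Var(y),
    so that reward = Var(y) - MSE corresponds to explained variance."""
    out = []
    for X, y in envs:
        r = np.mean((y - pred(X)) ** 2)
        if kind == "neg_reward":
            r -= np.var(y)
        out.append(r)
    return np.array(out)

# Risk matrix: rows = candidates, columns = environments.
R = np.array([env_risks(f, envs) for f in candidates])

# Regret: excess risk over the best candidate in each environment.
regret = R - R.min(axis=0)

# MaxRM: pick the candidate minimizing the worst-case (max over envs) risk.
best_mse = int(np.argmin(R.max(axis=1)))
best_regret = int(np.argmin(regret.max(axis=1)))
print(slopes[best_mse], slopes[best_regret])
```

The same argmin-of-max structure applies to all three risks; only the per-environment risk definition changes. The regret column-wise minimum is zero by construction, since the best candidate in each environment has zero excess risk.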