🤖 AI Summary
This paper addresses the computational inefficiency and instability of Generalized Random Forests (GRF) in high-dimensional heterogeneous treatment effect estimation, stemming from its reliance on gradient-based splitting criteria. We propose Fixed-Point GRF (FP-GRF), a gradient-free variant that replaces Jacobian computation with local fixed-point approximations for tree splitting. FP-GRF retains theoretical guarantees—namely, consistency and asymptotic normality—while substantially improving scalability. Empirical evaluations demonstrate that FP-GRF achieves several-fold speedup over standard GRF without sacrificing statistical accuracy, and exhibits superior robustness and computational efficiency on both synthetic and real-world datasets. The key innovation lies in the first integration of fixed-point iteration into nonparametric tree-splitting frameworks, establishing a new paradigm for high-dimensional heterogeneous effect modeling that is efficient, stable, and theoretically rigorous.
📝 Abstract
We propose a computationally efficient alternative to generalized random forests (GRFs, arXiv:1610.01271) for estimating heterogeneous effects in high dimensions. GRFs rely on a gradient-based splitting criterion that becomes computationally expensive and unstable in high dimensions; our method introduces a fixed-point approximation that eliminates the need for Jacobian estimation. This gradient-free approach preserves GRFs' theoretical guarantees of consistency and asymptotic normality while significantly improving computational efficiency. We demonstrate that our method achieves a several-fold speedup over standard GRFs without compromising statistical accuracy. Experiments on both simulated and real-world data validate our approach. Our findings suggest that the proposed method is a scalable alternative for localized effect estimation in machine learning and causal inference applications.
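To make the contrast concrete, the sketch below solves a toy local moment condition for a treatment effect both ways: a Newton-style update that requires the Jacobian of the moment function (as in gradient-based GRF splitting), and a damped fixed-point iteration that never forms a Jacobian. The moment function `psi`, the step size `eta`, and all variable names are illustrative assumptions, not the paper's actual FP-GRF criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy local moment condition for a treatment effect theta:
#   psi(theta; Y, W) = W * (Y - W * theta),  with E[psi] = 0 at the true effect.
# (Hypothetical illustration; the paper's actual splitting criterion differs.)
n = 500
W = rng.binomial(1, 0.5, n).astype(float)   # binary treatment indicator
tau_true = 2.0
Y = tau_true * W + rng.normal(0.0, 1.0, n)  # outcome with true effect tau_true

def psi_bar(theta):
    """Empirical average of the moment function at theta."""
    return np.mean(W * (Y - W * theta))

# Newton-style solve: needs the Jacobian d(psi_bar)/d(theta) = -mean(W^2),
# analogous to the Jacobian estimation that gradient-based GRF splitting requires.
jac = -np.mean(W * W)
theta_newton = 0.0 - psi_bar(0.0) / jac

# Gradient-free fixed-point iteration: theta <- theta + eta * psi_bar(theta).
# For this linear moment it contracts whenever 0 < eta < 2 / mean(W^2);
# no Jacobian is ever computed.
eta = 1.0
theta_fp = 0.0
for _ in range(50):
    theta_fp = theta_fp + eta * psi_bar(theta_fp)
```

Both routes converge to the same root of the empirical moment equation; the fixed-point route trades one Jacobian solve for a few cheap function evaluations, which is the scalability argument the abstract makes.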