🤖 AI Summary
To address real-time supervised classification over non-stationary data streams, this work departs from conventional incremental decision tree paradigms (e.g., ensembles of Hoeffding trees) and introduces a pretrained tabular foundation model, TabPFN, into dynamic stream settings for the first time, proposing a parameter-free in-context learning framework. The method couples a lightweight online data-sketching mechanism with a sliding memory buffer to enable implicit meta-learning for real-time contextual adaptation, and integrates prompt tuning to improve robustness against concept drift. Evaluated on standard non-stationary stream benchmarks, the approach consistently outperforms Hoeffding tree ensembles, with an average accuracy gain of +3.2% and a 40% reduction in response latency, demonstrating markedly faster adaptation. The work establishes a new paradigm for leveraging large foundation models in streaming structured learning.
📝 Abstract
State-of-the-art data stream mining for supervised classification has traditionally relied on ensembles of incremental decision trees. The emergence of large tabular models, i.e., transformers designed for structured numerical data, marks a significant paradigm shift: rather than updating weights, these models adapt through in-context learning and prompt tuning. On-the-fly sketches that summarize unbounded streaming data can be fed to such a pre-trained model for efficient processing. This work bridges advances from both areas, highlighting how transformers' implicit meta-learning abilities, pre-training on drifting natural data, and reliance on context optimization directly address the core challenges of adaptive learning in dynamic environments. Exploring real-time model adaptation, this research demonstrates that TabPFN, coupled with a simple sliding memory strategy, consistently outperforms ensembles of Hoeffding trees across all non-stationary benchmarks. The paper outlines several promising research directions and urges the community to explore them as valuable opportunities to advance in-context stream learning.
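The sliding-memory strategy described above can be sketched as a prequential (test-then-train) loop: keep a bounded window of recent labelled examples and re-supply it as context to a fixed pretrained model at every step, so adaptation happens through the context rather than through weight updates. The sketch below is illustrative only: a toy nearest-centroid class stands in for TabPFN (whose `TabPFNClassifier` exposes a similar scikit-learn-style `fit`/`predict` interface), and the buffer size, stream length, and drift point are assumptions, not values from the paper.

```python
from collections import deque
import random


class NearestCentroidStub:
    """Toy stand-in for a pretrained in-context model such as TabPFN:
    'fit' merely stores the context; no gradient steps are taken."""

    def fit(self, X, y):
        sums, counts = {}, {}
        for x, label in zip(X, y):
            s = sums.setdefault(label, [0.0] * len(x))
            for i, v in enumerate(x):
                s[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {c: [v / counts[c] for v in s] for c, s in sums.items()}
        return self

    def predict(self, X):
        def sqdist(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b))
        return [min(self.centroids, key=lambda c: sqdist(x, self.centroids[c]))
                for x in X]


def prequential_icl(stream, model, buffer_size=128):
    """Test-then-train loop with a sliding memory buffer as the context set."""
    buffer = deque(maxlen=buffer_size)  # oldest examples fall out: implicit forgetting
    correct = total = 0
    for x, y in stream:
        if len(buffer) >= 2:  # need some context before predicting
            X_ctx, y_ctx = zip(*buffer)
            model.fit(list(X_ctx), list(y_ctx))  # "fit" = supply context, not train
            correct += int(model.predict([x])[0] == y)
            total += 1
        buffer.append((x, y))  # then learn: newest example enters the window
    return correct / max(total, 1)


def drifting_stream(n=400, drift_at=200, seed=0):
    """Synthetic 1-D stream whose concept flips abruptly halfway through."""
    rng = random.Random(seed)
    for t in range(n):
        x = rng.random()
        y = int(x > 0.5) if t < drift_at else int(x < 0.5)  # concept flip
        yield ([x], y)


acc = prequential_icl(drifting_stream(), NearestCentroidStub())
print(f"prequential accuracy under abrupt drift: {acc:.3f}")
```

Because stale examples age out of the bounded buffer, the context gradually repopulates with post-drift data and accuracy recovers without any retraining, which is the core appeal of in-context adaptation on non-stationary streams.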