Learning the Optimal Stopping for Early Classification within Finite Horizons via Sequential Probability Ratio Test

📅 2025-01-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
For early classification of finite-length time series, conventional sequential probability ratio test (SPRT)-based approaches require computationally expensive backward induction to determine the optimal stopping time, limiting their practical applicability. This paper proposes FIRMBOUND, the first framework to efficiently estimate the backward-induction solution directly from training data. It learns density ratios and convex functions to obtain statistically consistent estimators of the sufficient statistic and the conditional expectation needed for backward induction, and offers a faster Gaussian process regression variant that trades some statistical consistency for reduced training time. Evaluated on diverse i.i.d. and non-i.i.d., binary and multiclass, synthetic and real-world datasets, FIRMBOUND significantly reduces Bayes risk and decision-time variance, closely approaching the theoretically optimal speed-accuracy tradeoff boundary.
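To ground the terminology, the classical SPRT that FIRMBOUND builds on can be sketched in a few lines. This is a textbook Wald SPRT for two Gaussian hypotheses, not the paper's implementation; the means, variance, and target error rates below are illustrative assumptions. Note the final branch: when the horizon ends before either threshold is crossed, the plain SPRT returns no decision, which is exactly the finite-horizon regime the paper addresses.

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's SPRT for H0: N(mu0, sigma^2) vs. H1: N(mu1, sigma^2).

    Returns (decision, stopping_time); decision is None if the finite
    horizon ends before either threshold is crossed.
    """
    # Wald's approximate thresholds from the target error rates.
    a = math.log(beta / (1 - alpha))   # lower threshold: accept H0
    b = math.log((1 - beta) / alpha)   # upper threshold: accept H1
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        # Gaussian per-sample log-likelihood ratio log p1(x) / p0(x).
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
        if llr >= b:
            return "H1", t
        if llr <= a:
            return "H0", t
    return None, len(samples)  # horizon reached without a decision

decision, t = sprt([1.0] * 20)  # strongly H1-like data: stops early
```

With ambiguous data (e.g. samples near the midpoint 0.5), the log-likelihood ratio drifts slowly and the test can exhaust the horizon undecided, which is why a finite-horizon stopping rule is needed.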

📝 Abstract
Time-sensitive machine learning benefits from the Sequential Probability Ratio Test (SPRT), which provides an optimal stopping time for early classification of time series. However, in finite-horizon scenarios, where input lengths are finite, determining the optimal stopping rule becomes computationally intensive due to the need for backward induction, limiting practical applicability. We thus introduce FIRMBOUND, an SPRT-based framework that efficiently estimates the solution to backward induction from training data, bridging the gap between optimal stopping theory and real-world deployment. It employs density ratio estimation and convex function learning to provide statistically consistent estimators of the sufficient statistic and the conditional expectation, both essential for solving backward induction; consequently, FIRMBOUND minimizes the Bayes risk to reach optimality. Additionally, we present a faster alternative using Gaussian process regression, which significantly reduces training time while retaining low deployment overhead, albeit with a potential compromise in statistical consistency. Experiments across independent and identically distributed (i.i.d.), non-i.i.d., binary, multiclass, synthetic, and real-world datasets show that FIRMBOUND achieves optimality in terms of both Bayes risk and the speed-accuracy tradeoff. Furthermore, it advances the tradeoff boundary toward optimality when possible and reduces decision-time variance, ensuring reliable decision-making. Code is publicly available at https://github.com/Akinori-F-Ebihara/FIRMBOUND
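The backward induction that FIRMBOUND learns to approximate can be illustrated exactly on a toy problem. The sketch below is not the paper's method: it solves a small binary Bernoulli test by dynamic programming over the sufficient statistic (samples seen, ones seen), with an assumed symmetric prior, illustrative likelihoods p0/p1, and a per-sample cost c. At each state it compares the Bayes misclassification risk of stopping now against the expected cost of taking one more sample; in realistic settings this table blows up, which is the computational bottleneck the paper targets.

```python
def posterior(k, t, p0=0.4, p1=0.6, prior=0.5):
    """P(H1 | k ones in t Bernoulli draws), via the likelihood ratio."""
    lr = (p1 / p0) ** k * ((1 - p1) / (1 - p0)) ** (t - k)
    odds = (prior / (1 - prior)) * lr
    return odds / (1 + odds)

def backward_induction(horizon=10, c=0.01, p0=0.4, p1=0.6):
    """Exact finite-horizon optimal stopping for a binary Bernoulli test.

    State: (t, k) = (samples seen, ones seen), a sufficient statistic.
    Stopping cost min(pi, 1 - pi) is the Bayes misclassification risk;
    each extra sample costs c. Returns V[t][k], the optimal expected cost.
    """
    V = [[0.0] * (t + 1) for t in range(horizon + 1)]
    for k in range(horizon + 1):            # at the horizon we must stop
        pi = posterior(k, horizon, p0, p1)
        V[horizon][k] = min(pi, 1 - pi)
    for t in range(horizon - 1, -1, -1):    # sweep backward in time
        for k in range(t + 1):
            pi = posterior(k, t, p0, p1)
            stop = min(pi, 1 - pi)
            p_one = pi * p1 + (1 - pi) * p0  # P(next sample = 1 | data)
            cont = c + p_one * V[t + 1][k + 1] + (1 - p_one) * V[t + 1][k]
            V[t][k] = min(stop, cont)        # stop now vs. continue
    return V
```

The optimal rule stops at (t, k) whenever the stopping cost attains the minimum. The table has O(horizon^2) entries here only because the Bernoulli statistic is one-dimensional and discrete; for general multiclass, non-i.i.d. data the state space makes exact backward induction intractable, motivating estimating its solution from data instead.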
Problem

Research questions and friction points this paper is trying to address.

Machine Learning
Sequential Probability Ratio Test (SPRT)
Computational Complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

FIRMBOUND
Gaussian Process Regression
Optimal Stopping Time