🤖 AI Summary
This paper addresses the problem of estimating the time-varying conditional distribution of locally stationary functional time series (LSFTS) in the presence of time-varying covariates. We propose a Wasserstein-distance-based convergence analysis framework for Nadaraya–Watson (NW) kernel estimation. Our main contribution is the first non-asymptotic convergence rate for the NW estimator in LSFTS under the Wasserstein metric, departing from the conventional L² or pointwise error paradigms. By combining small-ball probability bounds, α-mixing dependence modeling, and functional analysis on semi-metric spaces, we derive an explicit convergence rate with computable constants. The theoretical results hold for general semi-metric covariate spaces. Extensive simulations and experiments on real functional data demonstrate the method's finite-sample robustness and its superior performance over competing approaches.
📝 Abstract
Functional time series (FTS) extend traditional methodologies to accommodate data observed as functions or curves. A significant challenge in FTS lies in accurately capturing the time-dependence structure, especially in the presence of time-varying covariates. For time series with time-varying statistical properties, locally stationary time series (LSTS) provide a robust framework that allows smooth changes in mean and variance over time. This work investigates the Nadaraya–Watson (NW) estimation procedure for the conditional distribution of locally stationary functional time series (LSFTS), where the covariates reside in a space endowed with a semi-metric. Under small-ball probability and mixing conditions, we establish convergence rates of the NW estimator for LSFTS with respect to the Wasserstein distance. The finite-sample performance of the model and the estimation method is illustrated through extensive numerical experiments on both simulated and real functional data.
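To make the estimator concrete, here is a minimal sketch of an NW-type estimate of the conditional CDF with double localization: one kernel in rescaled time (capturing local stationarity) and one kernel in the semi-metric distance between functional covariates. All names, the choice of Epanechnikov kernel, the discretized-L² semi-metric, and the bandwidths are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def semi_metric(x, y):
    # Illustrative semi-metric: root-mean-square distance between
    # two curves observed on a common discretization grid.
    return np.sqrt(np.mean((x - y) ** 2))

def epanechnikov(u):
    # Compactly supported kernel on [-1, 1].
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def nw_conditional_cdf(t, x, y_grid, times, X, Y, h_t, h_x):
    """NW estimate of F(y | X = x) at rescaled time t.

    times : (n,) observation times rescaled to [0, 1]
    X     : (n, p) functional covariates discretized on p grid points
    Y     : (n,) scalar responses
    Weights combine a kernel in time (local stationarity) with a
    kernel in the semi-metric distance to the target curve x.
    """
    w_time = epanechnikov((times - t) / h_t)
    dists = np.array([semi_metric(x, Xi) for Xi in X])
    w_cov = epanechnikov(dists / h_x)
    w = w_time * w_cov
    if w.sum() == 0.0:
        # No observation falls in the local time/covariate window.
        return np.full(len(y_grid), np.nan)
    w = w / w.sum()
    # Weighted empirical CDF: sum of weights of responses <= y.
    return np.array([(w * (Y <= y)).sum() for y in y_grid])
```

By construction the output is a valid CDF on the grid (values in [0, 1], nondecreasing in y), since the weights are nonnegative and sum to one; shrinking `h_t` and `h_x` jointly is what the paper's rate analysis balances against the small-ball probability of the covariate space.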