AI Summary
To address the challenges of low calibration accuracy, high latency, and excessive energy consumption in fine-grained time-series monitoring with low-power, low-precision sensors under resource-constrained conditions, this paper proposes TESLA, a lightweight real-time calibration framework. Its core innovation is the Logarithmic Bucketing Attention (LBA) mechanism, which preserves nonlinear modeling capability while reducing Transformer attention complexity to *O(n log n)*, significantly improving hardware efficiency. TESLA further integrates adaptive temporal feature encoding and an end-to-end differentiable calibration module. Experiments demonstrate that TESLA outperforms state-of-the-art deep learning models and purpose-built linear methods in calibration accuracy, response latency, and energy efficiency, while sustaining real-time processing of long sequences. This work establishes a new approach to high-precision time-series sensing at the edge.
Abstract
Precise sensor measurements are crucial, but data is usually collected by low-cost, low-tech systems that are often inaccurate and therefore require further calibration. To that end, we first identify three requirements for effective calibration under practical low-tech sensor conditions. Based on these requirements, we develop TESLA, a Transformer for effective sensor calibration utilizing logarithmic-binned attention. TESLA uses a high-performance deep learning architecture, the Transformer, to calibrate measurements and capture their non-linear components. At its core, it employs logarithmic binning to minimize attention complexity. TESLA achieves consistent real-time calibration, even with longer sequences and finer-grained time series on hardware-constrained systems. Experiments show that TESLA outperforms both recent deep learning models and purpose-built linear models in accuracy, calibration speed, and energy efficiency.
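The summary and abstract do not spell out how logarithmic binning reduces attention cost, so the following is a minimal, hypothetical sketch of one plausible log-binned causal attention: for each query position, the most recent key is kept as-is while older keys are mean-pooled into buckets whose sizes grow exponentially with distance (1, 2, 4, ...). Each query then attends to O(log n) bucket summaries instead of O(n) individual keys, giving O(n log n) total work. The function names and pooling choice here are illustrative assumptions, not the paper's actual LBA formulation.

```python
import numpy as np

def log_bucket_indices(pos):
    """For query position `pos`, return index arrays of past positions
    grouped into exponentially growing buckets (sizes 1, 2, 4, ...),
    so the full history is covered by O(log pos) buckets."""
    buckets, start, size = [], pos, 1
    while start > 0:
        lo = max(0, start - size)
        buckets.append(np.arange(lo, start))
        start, size = lo, size * 2
    return buckets

def log_binned_attention(q, k, v):
    """Single-head causal attention where keys/values in each
    logarithmic bucket are mean-pooled into one summary vector
    before the softmax. q, k, v: (n, d) arrays; returns (n, d).
    Per-query cost is O(d * log n) instead of O(d * n)."""
    n, d = q.shape
    out = np.zeros_like(v)
    for t in range(n):
        # The current position attends to itself exactly,
        # plus one pooled summary per logarithmic bucket of history.
        keys, vals = [k[t]], [v[t]]
        for idx in log_bucket_indices(t):
            keys.append(k[idx].mean(axis=0))
            vals.append(v[idx].mean(axis=0))
        K, V = np.stack(keys), np.stack(vals)
        scores = K @ q[t] / np.sqrt(d)
        w = np.exp(scores - scores.max())   # numerically stable softmax
        w /= w.sum()
        out[t] = w @ V
    return out
```

Mean pooling is only one way to summarize a bucket; learned projections or max pooling would fit the same O(n log n) skeleton.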