🤖 AI Summary
Edge environments face significant challenges in processing multi-source, heterogeneous IoT data streams and in enabling real-time closed-loop AI, particularly reinforcement learning (RL), due to protocol incompatibility, disparate sampling rates, data loss, and high latency. Method: This paper proposes a lightweight, edge-AI-oriented data stream processing architecture that tightly integrates online reward computation, caching of data for model retraining, and real-time feature engineering. It supports multi-source synchronization, adaptive normalization, protocol translation, and fault-tolerant recovery within an edge computing framework, enabling tightly coupled inference-and-learning loops. Contribution/Results: Evaluated on resource-constrained edge devices, the architecture achieves sub-100 ms end-to-end latency, significantly improves data robustness, and enhances the stability of AI model self-adaptation and optimization, providing a deployable foundational infrastructure for dynamic edge intelligence.
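Two of the mechanisms named above, multi-source synchronization and adaptive normalization, can be illustrated in miniature. The sketch below is hypothetical (the paper does not publish Percepta's API): `RunningNorm` applies online z-score normalization via Welford's algorithm, and `align_to_tick` aligns timestamped samples from one source onto a common clock tick so streams with disparate sampling rates can be joined.

```python
import math

class RunningNorm:
    """Adaptive z-score normalization using Welford's online algorithm."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> float:
        # Update running mean/variance, then return the normalized sample.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        std = math.sqrt(self.m2 / self.n) if self.n > 1 else 1.0
        return (x - self.mean) / (std or 1.0)

def align_to_tick(samples, period_ms):
    """Align timestamped (t_ms, value) samples from one source onto a
    common tick grid; the last value seen within each period wins."""
    aligned = {}
    for t, v in samples:
        aligned[t // period_ms] = v  # last-write-wins per tick
    return aligned
```

In a real deployment each source would feed its own `RunningNorm` and tick map, and ticks present in every map would form a synchronized feature vector.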
📝 Abstract
The rise of real-time data and the proliferation of Internet of Things (IoT) devices have highlighted the limitations of cloud-centric solutions, particularly regarding latency, bandwidth, and privacy. These challenges have driven the growth of Edge Computing. IoT deployments also introduce a further set of problems: harmonizing data rates across multiple sources, protocol conversion, handling data loss, and integrating with Artificial Intelligence (AI) models. This paper presents Percepta, a lightweight Data Stream Processing (DSP) system tailored to support AI workloads at the edge, with a particular focus on Reinforcement Learning (RL). It introduces specialized features such as reward function computation, data storage for model retraining, and real-time data preparation to support continuous decision-making. Additional functionalities include data normalization, harmonization across heterogeneous protocols and sampling rates, and robust handling of missing or incomplete data, making it well suited to the challenges of edge-based AI deployment.
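The RL-support features (reward function computation and data storage for retraining) can be sketched as follows. This is an illustrative example only, not Percepta's actual interface: `latency_reward` is a toy reward tied to the sub-100 ms latency target, and `RetrainingCache` is a bounded FIFO buffer of transitions kept at the edge for later model retraining.

```python
from collections import deque

class RetrainingCache:
    """Bounded FIFO cache of (state, action, reward, next_state) transitions
    retained at the edge for periodic model retraining."""
    def __init__(self, capacity: int = 1024):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def add(self, transition):
        self.buffer.append(transition)

def latency_reward(latency_ms: float, target_ms: float = 100.0) -> float:
    """Toy reward: +1 while end-to-end latency meets the target, decreasing
    linearly (clipped at -1) as latency exceeds it."""
    if latency_ms <= target_ms:
        return 1.0
    return max(-1.0, 1.0 - (latency_ms - target_ms) / target_ms)
```

The design point is that the reward is computed online, next to the data stream, so the RL loop never waits on a round trip to the cloud; the cache bounds memory so retraining data never exhausts a constrained edge device.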