RiverText: A Python Library for Training and Evaluating Incremental Word Embeddings from Text Data Streams

📅 2025-06-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional static word embeddings struggle to adapt to evolving language, such as emerging neologisms and hashtags, in social media and other streaming contexts. To address this, the paper introduces the first incremental word embedding training and evaluation framework tailored to streaming text. It pioneers the adaptation of classical word embedding evaluation tasks to data stream environments and supports multiple incremental learning algorithms, including Skip-gram, CBOW, and Word Context Matrix. Built on PyTorch, the framework provides an efficient, scalable neural network backend for low-latency model updates and real-time evaluation. Comprehensive experiments assess algorithmic robustness and convergence under semantic drift. The open-source implementation gives the research community a standardized benchmark and reproducible toolkit for incremental word representation learning.
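The count-based Word Context Matrix approach mentioned above can be illustrated with a minimal streaming co-occurrence counter. The sketch below is not RiverText's implementation; the class name `IncrementalWCM` and its methods are hypothetical, and it only shows the core idea: counts are updated one incoming text at a time under a bounded vocabulary, so no full corpus pass is ever required.

```python
from collections import defaultdict

class IncrementalWCM:
    """Hypothetical sketch of an incremental word-context matrix:
    co-occurrence counts updated one tweet/sentence at a time."""

    def __init__(self, window_size=2, max_vocab=10000):
        self.window_size = window_size
        self.max_vocab = max_vocab
        # word -> {context word -> co-occurrence count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn_one(self, tokens):
        # Update counts for a single incoming text (streaming update).
        for i, word in enumerate(tokens):
            if len(self.counts) >= self.max_vocab and word not in self.counts:
                continue  # crude vocabulary cap to bound memory
            lo = max(0, i - self.window_size)
            hi = min(len(tokens), i + self.window_size + 1)
            for j in range(lo, hi):
                if j != i:
                    self.counts[word][tokens[j]] += 1

    def vector(self, word):
        # Sparse representation: context word -> count.
        return dict(self.counts.get(word, {}))

wcm = IncrementalWCM(window_size=1)
wcm.learn_one("new hashtag trends fast".split())
wcm.learn_one("hashtag trends emerge daily".split())
print(wcm.vector("trends"))  # counts reflect both texts seen so far
```

A real count-based model would typically reweight these raw counts (e.g., with PPMI) and evict rare words when the vocabulary cap is hit, but the streaming update pattern is the same.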

📝 Abstract
Word embeddings have become essential components in various information retrieval and natural language processing tasks, such as ranking, document classification, and question answering. However, despite their widespread use, traditional word embedding models present a limitation in their static nature, which hampers their ability to adapt to the constantly evolving language patterns that emerge in sources such as social media and the web (e.g., new hashtags or brand names). To overcome this problem, incremental word embedding algorithms are introduced, capable of dynamically updating word representations in response to new language patterns and processing continuous data streams. This paper presents RiverText, a Python library for training and evaluating incremental word embeddings from text data streams. Our tool is a resource for the information retrieval and natural language processing communities that work with word embeddings in streaming scenarios, such as analyzing social media. The library implements different incremental word embedding techniques, such as Skip-gram, Continuous Bag of Words, and Word Context Matrix, in a standardized framework. In addition, it uses PyTorch as its backend for neural network training. We have implemented a module that adapts existing intrinsic static word embedding evaluation tasks for word similarity and word categorization to a streaming setting. Finally, we compare the implemented methods with different hyperparameter settings and discuss the results. Our open-source library is available at https://github.com/dccuchile/rivertext.
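The abstract notes that intrinsic evaluation tasks such as word similarity are adapted to a streaming setting. One common way to do this, sketched below under assumed names (this is not RiverText's API), is to periodically score the current embeddings against a gold similarity set using Spearman correlation, skipping pairs whose words have not yet entered the vocabulary:

```python
import math

def _ranks(values):
    # Simple ranking (assumes no ties, for brevity).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for rank, idx in enumerate(order):
        ranks[idx] = float(rank)
    return ranks

def spearman(xs, ys):
    # Spearman correlation = Pearson correlation of the ranks.
    rx, ry = _ranks(xs), _ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def periodic_similarity_eval(get_vector, gold_pairs):
    """Score the *current* embeddings against a gold word-similarity set,
    skipping out-of-vocabulary pairs -- the streaming twist on the classic
    static benchmark. `get_vector` returns None for unseen words."""
    model_scores, gold_scores = [], []
    for w1, w2, gold in gold_pairs:
        u, v = get_vector(w1), get_vector(w2)
        if u is None or v is None:
            continue
        model_scores.append(cosine(u, v))
        gold_scores.append(gold)
    return spearman(model_scores, gold_scores)

# Toy check: model similarities rank the in-vocabulary pairs like the gold set.
vecs = {"cat": [1.0, 0.0], "dog": [0.9, 0.1], "car": [0.0, 1.0]}
gold = [("cat", "dog", 9.0), ("cat", "car", 1.0), ("dog", "unseen", 5.0)]
print(periodic_similarity_eval(lambda w: vecs.get(w), gold))
```

Calling such a function every N processed instances yields a curve of benchmark scores over the stream, which is how convergence and drift robustness can be compared across methods.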
Problem

Research questions and friction points this paper is trying to address.

Overcoming static limitations in traditional word embeddings
Adapting to evolving language patterns in data streams
Providing tools for incremental embedding training and evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Python library for incremental word embeddings
Implements Skip-gram, CBOW, Word Context Matrix
Uses PyTorch backend for neural training
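To make the Skip-gram line above concrete, here is a from-scratch sketch of one incremental skip-gram negative-sampling update. This is the general algorithm in plain Python, not the library's PyTorch backend, and all names (`sgns_update`, the toy vocabulary) are illustrative: each arriving (center, context) pair nudges the center vector toward its observed context and away from sampled negatives.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sgns_update(W_in, W_out, center, context, negatives, lr=0.025):
    """One skip-gram negative-sampling step on a single streamed pair."""
    v = W_in[center]
    grad_v = [0.0] * len(v)
    # Positive example gets label 1, sampled negatives get label 0.
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[word]
        score = sigmoid(sum(a * b for a, b in zip(v, u)))
        g = lr * (label - score)
        for k in range(len(v)):
            grad_v[k] += g * u[k]
            u[k] += g * v[k]      # update output (context) vector in place
    for k in range(len(v)):
        v[k] += grad_v[k]         # update input (center) vector

random.seed(0)
dim = 8
vocab = ["new", "hashtag", "trends", "noise"]
W_in = {w: [random.uniform(-0.5, 0.5) for _ in range(dim)] for w in vocab}
W_out = {w: [0.0] * dim for w in vocab}

# A stream of (center, context) pairs arrives one at a time; here the
# same pair repeats so the effect of the updates is easy to see.
for _ in range(200):
    sgns_update(W_in, W_out, "hashtag", "trends", negatives=["noise"])
```

A PyTorch version would batch these pairs and let an optimizer apply the same gradients, which is what makes low-latency updates on a stream practical at scale.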