Learning Explainable Stock Predictions with Tweets Using Mixture of Experts

📅 2025-07-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of long-horizon, multi-granularity stock price forecasting using heterogeneous multimodal data—namely historical price series and unstructured textual data (e.g., news and social media)—this paper proposes FTS-Text-MoE, a novel forecasting framework. First, it extracts salient textual summaries and constructs point-wise embeddings, which are jointly fed with time-series inputs. Second, it introduces a Mixture-of-Experts (MoE) Transformer decoder to enhance modeling capacity while reducing computational overhead. Third, it incorporates a multi-resolution prediction head to support flexible, scale-aware trend forecasting. Extensive experiments on real-world financial datasets demonstrate that FTS-Text-MoE significantly improves forecasting accuracy over state-of-the-art baselines. Moreover, it achieves superior risk-adjusted returns—evidenced by higher investment profitability and Sharpe ratio—while maintaining computational efficiency, interpretability, and practical deployability.
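The paper does not give implementation details of its MoE decoder, but the core idea it names, activating only a subset of experts per token to cut compute, can be sketched as a top-k gated feed-forward layer. Everything below (shapes, ReLU experts, k=2 routing, renormalized gate weights) is a hypothetical illustration, not the authors' architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def moe_layer(tokens, gate_w, expert_ws, k=2):
    """Sparse MoE feed-forward: route each token to its top-k experts and
    combine expert outputs with renormalized gate weights, so only k of the
    len(expert_ws) experts run per token.
    tokens: (n, d); gate_w: (d, n_experts); expert_ws: list of (d, d)."""
    scores = softmax(tokens @ gate_w)             # (n, n_experts) routing probs
    topk = np.argsort(scores, axis=-1)[:, -k:]    # top-k expert ids per token
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        sel = topk[i]
        w = scores[i, sel] / scores[i, sel].sum()  # renormalize over chosen experts
        for j, wj in zip(sel, w):
            out[i] += wj * np.maximum(tok @ expert_ws[j], 0.0)  # ReLU expert
    return out

rng = np.random.default_rng(0)
d, n_experts = 8, 4
tokens = rng.normal(size=(5, d))
gate_w = rng.normal(size=(d, n_experts))
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_layer(tokens, gate_w, expert_ws, k=2)
print(y.shape)  # (5, 8)
```

With k=2 of 4 experts active, only half of the expert parameters are touched per token, which is the source of the computational savings the summary claims.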

📝 Abstract
Stock price movements are influenced by many factors, and alongside historical price data, textual information is a key source. Public news and social media offer valuable insights into market sentiment and emerging events. These sources are fast-paced, diverse, and significantly impact future stock trends. Recently, LLMs have enhanced financial analysis, but prompt-based methods still have limitations, such as input length restrictions and difficulties in predicting sequences of varying lengths. Additionally, most models rely on dense computational layers, which are resource-intensive. To address these challenges, we propose the FTS-Text-MoE model, which combines numerical data with key summaries from news and tweets using point embeddings, boosting prediction accuracy through the integration of factual textual data. The model uses a Mixture of Experts (MoE) Transformer decoder to process both data types. By activating only a subset of model parameters, it reduces computational costs. Furthermore, the model features multi-resolution prediction heads, enabling flexible forecasting of financial time series at different scales. Experimental results show that FTS-Text-MoE outperforms baseline methods in terms of investment returns and Sharpe ratio, demonstrating its superior accuracy and ability to predict future market trends.
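The abstract's "point embeddings" suggest aligning a text-summary vector with each time-series point before decoding. One minimal way to realize that, assuming projection into a shared model dimension and zero vectors for steps with no news or tweets (all names and shapes here are assumptions, not the paper's method), is:

```python
import numpy as np

def fuse_point_embeddings(prices, text_emb, w_p, w_t):
    """Project each price point and its time-aligned text-summary embedding
    into a shared model dimension and sum them, yielding one fused token
    per timestamp for the decoder.
    prices: (T, f) numeric features; text_emb: (T, e) per-step text vectors,
    zero where no textual data was posted at that step."""
    return prices @ w_p + text_emb @ w_t   # (T, d_model)

T, f, e, d = 6, 4, 16, 8
rng = np.random.default_rng(1)
prices = rng.normal(size=(T, f))
text_emb = rng.normal(size=(T, e))
text_emb[2] = 0.0                          # a step with no news/tweet signal
tokens = fuse_point_embeddings(prices, text_emb,
                               rng.normal(size=(f, d)), rng.normal(size=(e, d)))
print(tokens.shape)  # (6, 8)
```

Summation (rather than concatenation) keeps the token width fixed regardless of how the text embeddings are produced; either choice is consistent with the abstract's description.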
Problem

Research questions and friction points this paper is trying to address.

Predict stock prices using tweets and news with limited input length
Reduce computational costs in financial prediction models
Forecast financial time series at multiple scales flexibly
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines numerical data with textual summaries using embeddings
Uses Mixture of Experts Transformer for efficient processing
Features multi-resolution heads for flexible time series forecasting
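The multi-resolution heads in the last bullet can be pictured as one small output head per forecast scale, all reading the same decoder state. The horizons (1, 5, 20 steps) and the linear-head form below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def multi_resolution_forecast(h, heads):
    """Apply one linear head per output resolution to the final decoder
    state h (d,), producing forecasts of different horizon lengths,
    e.g. next 1, 5, or 20 trading days from a shared representation."""
    return {horizon: h @ w for horizon, w in heads.items()}

d = 8
rng = np.random.default_rng(2)
h = rng.normal(size=d)
heads = {1: rng.normal(size=(d, 1)),      # short-term head
         5: rng.normal(size=(d, 5)),      # weekly head
         20: rng.normal(size=(d, 20))}    # monthly head
preds = multi_resolution_forecast(h, heads)
print({k: v.shape for k, v in preds.items()})
```

Because every head shares the same backbone state, adding a new forecasting scale costs only one extra projection matrix, which is what makes the "flexible, scale-aware" forecasting cheap.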