🤖 AI Summary
To address the tension between effective fusion of heterogeneous modalities (e.g., text, images, behavioral logs) and stringent low-latency requirements in multimodal CTR prediction, this paper proposes an efficient, lightweight multimodal fusion framework. It introduces an adaptive sparse target attention mechanism to precisely capture salient multimodal features within user behavior sequences; pioneers the integration of quadratic neural networks (QNNs) into CTR modeling to explicitly capture high-order cross-modal interactions; and employs a multimodal embedding fusion strategy that balances representational capacity with computational efficiency. Evaluated on Task 2 of the WWW 2025 EReL@MIR Challenge, the method achieves an AUC of 0.9798, ranking second—demonstrating substantial gains in both predictive accuracy and inference efficiency.
📝 Abstract
Multimodal click-through rate (CTR) prediction is a key technique in industrial recommender systems. It leverages heterogeneous modalities such as text, images, and behavioral logs to capture high-order feature interactions between users and items, thereby enhancing the system's understanding of user interests and its ability to predict click behavior. The primary challenge in this field lies in effectively utilizing the rich semantic information from multiple modalities while satisfying the low-latency requirements of online inference in real-world applications. To foster progress in this area, the Multimodal CTR Prediction Challenge Track of the WWW 2025 EReL@MIR Workshop formulates the problem as two tasks: (1) Task 1, Multimodal Item Embedding, which explores multimodal information extraction and item representation learning methods that enhance recommendation tasks; and (2) Task 2, Multimodal CTR Prediction, which explores which multimodal recommendation models can effectively leverage multimodal embedding features to achieve better performance. In this paper, we propose a novel model for Task 2, the Quadratic Interest Network (QIN) for Multimodal CTR Prediction. Specifically, QIN employs adaptive sparse target attention to extract multimodal user behavior features, and leverages Quadratic Neural Networks to capture high-order feature interactions. As a result, QIN achieved an AUC of 0.9798 on the leaderboard and ranked second in the competition. The model code, training logs, hyperparameter configurations, and checkpoints are available at https://github.com/salmon1802/QIN.
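To make the two components concrete, below is a minimal NumPy sketch of the general ideas: target attention over a behavior sequence made sparse via top-k selection, feeding a quadratic layer of the common form (W1·x) ⊙ (W2·x) + W3·x + b to inject explicit second-order feature interactions. This is an illustrative stand-in under stated assumptions, not the paper's actual architecture; all function names, the top-k sparsification choice, and the dimensions are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_target_attention(target, behaviors, k=4):
    """Illustrative sparse target attention: score each behavior embedding
    against the target item, keep only the k highest-scoring behaviors,
    and pool them with softmax weights. (QIN's adaptive sparsity may
    differ; top-k is one simple way to realize sparse attention.)"""
    scores = behaviors @ target            # (seq_len,) relevance to target
    keep = np.argsort(scores)[-k:]         # indices of the top-k behaviors
    weights = softmax(scores[keep])        # renormalize over the kept set
    return weights @ behaviors[keep]       # (dim,) pooled user-interest vector

def quadratic_layer(x, W1, W2, W3, b):
    """One quadratic-neuron layer: the elementwise product of two linear
    projections yields explicit second-order feature crosses, on top of
    a linear term and bias."""
    return (W1 @ x) * (W2 @ x) + W3 @ x + b

# Toy end-to-end pass with random embeddings (dimensions are arbitrary).
rng = np.random.default_rng(0)
dim, seq_len, hidden = 8, 16, 4
target = rng.normal(size=dim)                      # target-item embedding
behaviors = rng.normal(size=(seq_len, dim))        # behavior-sequence embeddings

interest = sparse_target_attention(target, behaviors, k=4)
x = np.concatenate([target, interest])             # (2*dim,) fused input
W1, W2, W3 = (rng.normal(size=(hidden, 2 * dim)) for _ in range(3))
b = rng.normal(size=hidden)
logit = quadratic_layer(x, W1, W2, W3, b).sum()    # scalar pre-sigmoid score
ctr = 1.0 / (1.0 + np.exp(-logit))                 # predicted click probability
print(interest.shape, 0.0 < ctr < 1.0)
```

Note that because the quadratic term multiplies two projections of the same input, each layer squares the polynomial degree of the features it can express, which is why stacking even a few such layers captures high-order interactions that a plain MLP must approximate implicitly.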