🤖 AI Summary
Existing music recommendation systems suffer from limitations in multimodal modeling, long-term user preference representation, and deployment scalability: large language models incur high computational cost and latency, while retrieval-based methods rely on single-modal representations, ignore user history, and require full-model retraining. To address these issues, we propose JAM, a lightweight natural language–driven music recommendation framework. JAM models user-query-item interactions in a shared latent space via a vector-translation mechanism inspired by knowledge graph embeddings; aggregates multimodal item features dynamically with cross-attention and sparse Mixture-of-Experts (MoE) networks; and incorporates long-term user preferences through anonymized user embeddings, enabling integration with existing stacks without full-model retraining. Evaluated on the newly constructed JAMSessions dataset, JAM achieves significant gains in recommendation accuracy while offering strong interpretability and plug-and-play adaptability.
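The sparse-MoE aggregation idea can be illustrated with a minimal sketch. Everything below (dimensions, number of experts, linear experts, the mean-fused gate input) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16          # shared latent dimension (illustrative)
n_experts = 4   # number of experts (illustrative)
top_k = 2       # sparse routing: only the top-2 experts fire

# Toy multimodal item features (e.g. audio, lyrics, metadata),
# already projected to the same d-dimensional space.
modalities = rng.normal(size=(3, d))
x = modalities.mean(axis=0)  # simple fused input for the gate (assumption)

# Each expert is a toy linear map; the gate scores experts per input.
experts = rng.normal(size=(n_experts, d, d))
gate_w = rng.normal(size=(d, n_experts))

logits = x @ gate_w
chosen = np.argsort(-logits)[:top_k]   # sparse top-k expert selection
weights = np.exp(logits[chosen])
weights /= weights.sum()               # softmax over the chosen experts only

# Aggregate: weighted sum of the selected experts' outputs.
output = sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))
```

Only `top_k` of the `n_experts` experts are evaluated per input, which is what keeps sparse MoE cheap at inference time relative to running every expert.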
📝 Abstract
Natural language interfaces offer a compelling approach for music recommendation, enabling users to express complex preferences conversationally. While Large Language Models (LLMs) show promise in this direction, their scalability in recommender systems is limited by high costs and latency. Retrieval-based approaches using smaller language models mitigate these issues but often rely on single-modal item representations, overlook long-term user preferences, and require full model retraining, posing challenges for real-world deployment. In this paper, we present JAM (Just Ask for Music), a lightweight and intuitive framework for natural language music recommendation. JAM models user-query-item interactions as vector translations in a shared latent space, inspired by knowledge graph embedding methods like TransE. To capture the complexity of music and user intent, JAM aggregates multimodal item features via cross-attention and sparse mixture-of-experts. We also introduce JAMSessions, a new dataset of over 100k user-query-item triples with anonymized user/item embeddings, uniquely combining conversational queries with long-term user preferences. Our results show that JAM provides accurate recommendations, produces intuitive representations suitable for practical use cases, and can be easily integrated with existing music recommendation stacks.
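The TransE-style translation can be sketched as follows: a relevant item should satisfy user + query ≈ item, so candidates are ranked by distance to the translated point. The dimensions, toy embeddings, planted relevant item, and `score` function here are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # embedding dimension (illustrative)

# Toy embeddings: the user, the conversational query, and all candidate
# items live in the same d-dimensional latent space.
user = rng.normal(size=d)
query = rng.normal(size=d)
items = rng.normal(size=(1000, d))   # catalog of candidate tracks
items[42] = user + query             # plant one perfectly relevant item

def score(user, query, items):
    """TransE-style scoring: negative L2 distance to user + query."""
    target = user + query
    return -np.linalg.norm(items - target, axis=1)

scores = score(user, query, items)
top_10 = np.argsort(-scores)[:10]  # item 42 comes out on top
```

Because scoring reduces to a nearest-neighbor search around `user + query`, this kind of model slots naturally into standard vector-retrieval infrastructure.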