Taxonomy-Guided Zero-Shot Recommendations with LLMs

📅 2024-06-20
🏛️ International Conference on Computational Linguistics
📈 Citations: 2
Influential: 0
🤖 AI Summary
Large language models (LLMs) face key bottlenecks in recommendation: limited prompt length, unstructured item representations, and uncontrollable generation, all of which hinder effective zero-shot personalization. Method: the paper proposes the first zero-shot recommendation framework to leverage an explicit hierarchical taxonomy dictionary. The approach embeds structured taxonomy knowledge directly into prompts, enabling item categorization and controllable semantic feature generation via a two-stage paradigm (one-time categorization, then LLM-based recommendation) without domain-specific fine-tuning. Contribution/Results: through structured prompt engineering and zero-shot LLM inference, the method significantly outperforms existing zero-shot baselines across multiple public benchmarks, improving both recommendation accuracy and contextual relevance. The framework is interpretable, scalable, and requires no parameter updates or task-specific adaptation. Code is publicly available.

📝 Abstract
With the emergence of large language models (LLMs) and their ability to perform a variety of tasks, their application in recommender systems (RecSys) has shown promise. However, significant challenges arise when deploying LLMs in RecSys, such as limited prompt length, unstructured item information, and unconstrained generation of recommendations, leading to sub-optimal performance. To address these issues, we propose a novel method using a taxonomy dictionary. This method provides a systematic framework for categorizing and organizing items, improving the clarity and structure of item information. By incorporating the taxonomy dictionary into LLM prompts, we achieve efficient token utilization and controlled feature generation, leading to more accurate and contextually relevant recommendations. Our Taxonomy-guided Recommendation (TaxRec) approach features a two-step process: one-time taxonomy categorization and LLM-based recommendation, enabling zero-shot recommendations without domain-specific fine-tuning. Experimental results demonstrate that TaxRec significantly enhances recommendation quality compared to traditional zero-shot approaches, showcasing its efficacy as a personal recommender with LLMs. Code is available at https://github.com/yueqingliang1/TaxRec.
Problem

Research questions and friction points this paper is trying to address.

Enhances LLM-based recommender systems
Addresses unstructured item information issues
Improves zero-shot recommendation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Taxonomy dictionary enhances item categorization
LLM prompts optimized for token efficiency
Zero-shot recommendations without domain fine-tuning
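The two-step process above (one-time taxonomy categorization, then LLM-based recommendation) can be sketched in Python. This is a minimal illustration, not the paper's released implementation: the taxonomy, prompt wording, and the `llm` callable are all hypothetical stand-ins for whatever model client is used.

```python
# Hypothetical sketch of TaxRec's two-step paradigm. The taxonomy entries,
# prompt templates, and the `llm(prompt) -> str` callable are illustrative
# assumptions, not taken from the paper's code.

# A tiny example taxonomy dictionary (attribute -> allowed values).
TAXONOMY = {
    "genre": ["action", "comedy", "drama"],
    "era": ["classic", "modern"],
}

def categorize_items(items, taxonomy, llm):
    """Step 1 (one-time): map each raw item title to structured taxonomy slots."""
    categorized = {}
    for item in items:
        prompt = (
            f"Categorize the item '{item}' using this taxonomy: {taxonomy}. "
            "Answer with 'attribute=value' pairs only."
        )
        categorized[item] = llm(prompt)
    return categorized

def recommend(user_history, categorized, llm, k=3):
    """Step 2: zero-shot recommendation, with candidates compactly described
    by their taxonomy features to save prompt tokens and constrain output."""
    candidates = "; ".join(f"{title} ({feats})" for title, feats in categorized.items())
    prompt = (
        f"User history: {', '.join(user_history)}. "
        f"Candidate items with taxonomy features: {candidates}. "
        f"Return the top {k} candidate titles, one per line."
    )
    return llm(prompt).splitlines()[:k]
```

Because categorization is done once per catalog, only the compact structured features (not full unstructured descriptions) need to fit in the recommendation prompt, which is the token-efficiency argument the bullets above refer to.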