AI Summary
In e-commerce applications, LLM prompt engineering relies heavily on domain experts, incurs high iteration costs, and is prone to subjective bias. Method: This paper proposes the "Examples as the Prompt" (EaP) paradigm, which replaces natural-language prompts with labeled examples and enables rapid, dynamic LLM adaptation via unsupervised example selection and few-shot learning. We further introduce EaP_lite, a lightweight variant that eliminates manual prompt design entirely, supporting end-to-end automated prompt generation and efficient iteration. Contribution/Results: Evaluated across four live e-commerce business scenarios, EaP matches or surpasses expert-crafted prompts; EaP_lite achieves up to a 70% inference speedup; and A/B testing confirms a 0.06% uplift in composite revenue. To our knowledge, this is the first systematic application of a purely example-driven mechanism for LLM adaptation in e-commerce, establishing a new paradigm for low-barrier, high-robustness large-model deployment.
Abstract
Prompting LLMs offers an efficient way to guide output generation without explicit model training. In the e-commerce domain, prompting-based applications are widely used for tasks such as query understanding, recommender systems, and customer support. However, adapting LLMs to different tasks often requires extensive prompt engineering by domain experts, along with frequent updates to align with evolving business needs. Additionally, crafting fully unbiased natural language prompts remains a challenge for humans. To address these challenges, we propose a novel framework, Examples as the Prompt (EaP), which leverages labeled data to enhance prompts. Specifically, EaP automatically selects the most representative examples to maximize the few-shot capability of LLMs. It is efficient due to its unsupervised example selection and adaptive to potential data distribution shifts. We validate EaP on four real-world production use cases, demonstrating that it achieves comparable or even superior performance compared to hand-crafted prompts designed by domain experts. Additionally, we introduce EaP_lite, which entirely replaces the natural language components of prompts with labeled examples. EaP_lite improves LLM inference speed by up to 70% without compromising performance. A recent online A/B test shows that using EaP and EaP_lite for data labeling brings a significant composite revenue gain of 0.06%.
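The abstract describes two mechanisms: unsupervised selection of representative labeled examples, and assembling those examples directly into a few-shot prompt. The paper does not specify the selection algorithm here, so the sketch below is illustrative only: it uses greedy farthest-point selection over toy feature vectors (standing in for embeddings) as one plausible unsupervised way to pick diverse, representative examples, then builds an examples-only prompt in the spirit of EaP_lite. All names, data, and the selection heuristic are assumptions, not the paper's implementation.

```python
import math

# Toy labeled examples: (text, label, feature_vector).
# The 2-D vectors stand in for embeddings; in practice these would
# come from an embedding model. Hypothetical data, for illustration.
EXAMPLES = [
    ("wireless mouse", "electronics", [0.9, 0.1]),
    ("usb-c cable",    "electronics", [0.8, 0.2]),
    ("running shoes",  "apparel",     [0.1, 0.9]),
    ("cotton t-shirt", "apparel",     [0.2, 0.8]),
    ("hdmi adapter",   "electronics", [0.85, 0.15]),
]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_representatives(examples, k):
    """Greedy farthest-point selection: an unsupervised heuristic that
    picks mutually diverse examples to cover the data distribution.
    (One possible stand-in for the paper's unsupervised selection.)"""
    chosen = [examples[0]]
    while len(chosen) < k:
        # Add the example whose nearest chosen neighbor is farthest away.
        best = max(
            (e for e in examples if e not in chosen),
            key=lambda e: min(dist(e[2], c[2]) for c in chosen),
        )
        chosen.append(best)
    return chosen

def build_prompt(selected, query):
    """Assemble a prompt purely from labeled examples, with no
    natural-language instructions (EaP_lite-style)."""
    shots = "\n".join(f"Input: {t}\nLabel: {y}" for t, y, _ in selected)
    return f"{shots}\nInput: {query}\nLabel:"

if __name__ == "__main__":
    reps = select_representatives(EXAMPLES, k=2)
    print(build_prompt(reps, "bluetooth speaker"))
```

Because selection is unsupervised and cheap, the example pool can be re-selected whenever the underlying data distribution shifts, which is the adaptivity property the abstract highlights.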