ProductResearch: Training E-Commerce Deep Research Agents via Multi-Agent Synthetic Trajectory Distillation

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing large language model agents in supporting complex product research tasks within e-commerce settings, where deep interaction and broad contextual understanding are often lacking. To bridge this gap, the authors propose a multi-agent collaborative framework: a user agent infers shopping intent, while a supervisor agent orchestrates research agents to generate high-fidelity, long-horizon tool-use trajectories. These collaborative trajectories are then distilled—through reflection and internalization—into single-role training samples. This approach introduces a novel multi-agent synthetic trajectory distillation mechanism that transforms intricate collaborative processes into scalable training data for a single model. A fine-tuned, compact mixture-of-experts (MoE) model trained on this distilled data significantly outperforms baseline systems in answer comprehensiveness, research depth, and perceived user utility, achieving performance comparable to state-of-the-art closed-source systems.

📝 Abstract
Large Language Model (LLM)-based agents show promise for e-commerce conversational shopping, yet existing implementations lack the interaction depth and contextual breadth required for complex product research. Meanwhile, the Deep Research paradigm, despite advancing information synthesis in web search, suffers from domain gaps when transferred to e-commerce. We propose ProductResearch, a multi-agent framework that synthesizes high-fidelity, long-horizon tool-use trajectories for training robust e-commerce shopping agents. The framework employs a User Agent to infer nuanced shopping intents from behavioral histories, and a Supervisor Agent that orchestrates iterative collaboration with a Research Agent to generate synthetic trajectories culminating in comprehensive, insightful product research reports. These trajectories are rigorously filtered and distilled through a reflective internalization process that consolidates multi-agent supervisory interactions into coherent single-role training examples, enabling effective fine-tuning of LLM agents for complex shopping inquiries. Extensive experiments show that a compact MoE model fine-tuned on our synthetic data achieves substantial improvements over its base model in response comprehensiveness, research depth, and user-perceived utility, approaching the performance of frontier proprietary deep research systems and establishing multi-agent synthetic trajectory training as an effective and scalable paradigm for enhancing LLM-based shopping assistance.
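The pipeline the abstract describes — a User Agent inferring intent, a Supervisor/Research Agent loop producing a trajectory, and a reflective-internalization step that folds the multi-agent exchange into single-role training samples — can be sketched in simplified form. All names and interfaces below (`infer_intent`, `run_research`, `distill`) are illustrative assumptions; the paper's actual agents, tools, and prompts are not public, and the LLM calls are replaced by stubs.

```python
# Hypothetical sketch of ProductResearch-style trajectory distillation.
# Every function here is a stand-in for an LLM- or tool-backed component.
from dataclasses import dataclass, field

@dataclass
class Turn:
    role: str      # "supervisor" or "researcher"
    content: str

@dataclass
class Trajectory:
    intent: str
    turns: list = field(default_factory=list)

def infer_intent(behavior_history):
    """User Agent (stub): map a behavioral history to a shopping intent."""
    return f"research request derived from {len(behavior_history)} events"

def run_research(intent, max_steps=3):
    """Supervisor Agent iteratively directs the Research Agent, yielding a
    long-horizon tool-use trajectory that ends in a research report."""
    traj = Trajectory(intent=intent)
    for step in range(max_steps):
        traj.turns.append(Turn("supervisor", f"plan step {step}"))
        traj.turns.append(Turn("researcher", f"tool call + findings {step}"))
    traj.turns.append(Turn("researcher", "final product research report"))
    return traj

def distill(traj):
    """Reflective internalization (simplified): consolidate supervisory and
    research turns into one single-role sample, so a lone model can be
    fine-tuned to act without a supervisor at inference time."""
    merged = " ".join(t.content for t in traj.turns)
    return {"prompt": traj.intent, "response": merged}

sample = distill(run_research(infer_intent(["viewed:laptop", "cart:mouse"])))
```

The key design point mirrored here is that supervision exists only at data-synthesis time: after distillation, each training example pairs a user intent with a single coherent response trace, which is what makes the collaborative process scalable as fine-tuning data.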
Problem

Research questions and friction points this paper is trying to address.

e-commerce
deep research
large language models
product research
conversational shopping
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent framework
synthetic trajectory distillation
e-commerce deep research
reflective internalization
MoE fine-tuning
Jiangyuan Wang
Alibaba International Digital Commercial Group
Kejun Xiao
Alibaba International Digital Commercial Group
Huaipeng Zhao
Alibaba Inc
Natural Language Processing · Machine Learning
Tao Luo
Alibaba International Digital Commercial Group
Xiaoyi Zeng
Alibaba International Digital Commercial Group