🤖 AI Summary
This work proposes context-mediated domain adaptation, an approach that treats expert edits of AI-generated content not merely as endpoint corrections but as implicit carriers of domain knowledge. By reverse-engineering user edits to multi-agent generated outputs, the method dynamically distills implicit norms that guide large language model–driven reasoning. It establishes a bidirectional semantic link between generated content and system inference, enabling specification bootstrapping, knowledge transfer, and in-context learning, all grounded in edit patterns as a novel form of implicit knowledge representation. Integrated into the web-based framework Seedentia, the approach extracted 46 domain knowledge entries from expert revisions, demonstrating the feasibility of capturing and leveraging tacit expertise in adaptive AI systems.
📝 Abstract
Domain experts possess tacit knowledge that they cannot easily articulate through explicit specifications. When experts modify AI-generated artifacts by correcting terminology, restructuring arguments, and adjusting emphasis, these edits reveal domain understanding that remains latent in traditional prompt-based interactions. Current systems treat such modifications as endpoint corrections rather than as implicit specifications that could reshape subsequent reasoning. We propose context-mediated domain adaptation, a paradigm in which user modifications to system-generated artifacts serve as implicit domain specifications that reshape the behavior of LLM-powered multi-agent reasoning. Through our system Seedentia, a web-based multi-agent framework for sense-making, we demonstrate bidirectional semantic links between generated artifacts and system reasoning. Our approach enables specification bootstrapping, where vague initial prompts evolve into precise domain specifications through iterative human-AI collaboration; implicit knowledge transfer, where user edits are reverse-engineered into domain knowledge; and in-context learning, where agent behavior adapts based on observed correction patterns. We present results from an evaluation with domain experts who generated and modified research questions from academic papers. Our system extracted 46 domain knowledge entries from user modifications, demonstrating the feasibility of capturing implicit expertise through edit patterns, though the limited sample size constrains conclusions about systematic quality improvements.