🤖 AI Summary
Large language models (LLMs) struggle to access user-private or dynamically evolving knowledge—such as personal preferences, organizational policies, community norms, or real-time news. To address this, we propose a knowledge ecosystem for LLMs, introducing the first user-participatory, lightweight modular knowledge architecture. Our approach supports cross-platform knowledge acquisition (e.g., web clippings, Google Docs, GitHub), modular indexing, community-wide sharing, and context-aware automatic injection. Integrated with prompt augmentation and dynamic knowledge fusion techniques, it enables real-time, personalized knowledge infusion into LLM dialogues. The system has been publicly deployed for over 200 users, demonstrating significant improvements in response quality and relevance across personalized recommendation, domain-specific advice, and writing assistance tasks. Our core contribution is the first user-centric, configurable, shareable, and minimally intrusive knowledge co-construction paradigm—enabling collaborative, adaptive, and privacy-respecting knowledge integration with LLMs.
📝 Abstract
Large language models are designed to encode general purpose knowledge about the world from Internet data. Yet, a wealth of information falls outside this scope -- ranging from personal preferences to organizational policies, from community-specific advice to up-to-date news -- that users want models to access but remains unavailable. In this paper, we propose a knowledge ecosystem in which end-users can create, curate, and configure custom knowledge modules that are utilized by language models, such as ChatGPT and Claude. To support this vision, we introduce Knoll, a software infrastructure that allows users to make modules by clipping content from the web or authoring shared documents on Google Docs and GitHub, add modules that others have made, and rely on the system to insert relevant knowledge when interacting with an LLM. We conduct a public deployment of Knoll reaching over 200 users who employed the system for a diverse set of tasks including personalized recommendations, advice-seeking, and writing assistance. In our evaluation, we validate that using Knoll improves the quality of generated responses.