🤖 AI Summary
To address the limitation of conventional third-party annotation in accurately capturing subjective attributes (e.g., sentiment, beliefs), this paper introduces “author annotation”—a paradigm wherein text authors annotate their own subjective states *in situ* during content generation. The authors deploy an end-to-end system on a commercial chatbot serving over 10,000 users, integrating query-triggered annotation, dynamic question generation, a lightweight author-facing interface, and a recommendation model supporting online learning—a closed-loop “annotate-as-you-train” optimization. To their knowledge, this is the first empirical validation of author annotation in a real-world product recommendation setting: it achieves a 534% lift in click-through rate over a concurrently running industrial advertising baseline. For sentiment analysis, author annotations achieve higher quality, 3.2× faster collection, and 67% lower cost than third-party alternatives. The academic service platform is available at academic.echollm.io.
📝 Abstract
The status quo for labeling text is third-party annotation, but there are many cases where information directly from the document's source is preferable to a third-person proxy, especially for egocentric features like sentiment and belief. We introduce author labeling, an annotation technique in which the author of a document annotates it at the moment of creation. We collaborate with a commercial chatbot serving over 10,000 users to deploy an author labeling system for subjective features related to product recommendation. This system identifies task-relevant queries, generates labeling questions on the fly, and records authors' answers in real time. We train and deploy an online-learning model architecture for product recommendation that continuously improves from author labels, and find that it achieves a 534% increase in click-through rate compared to an industry advertising baseline running concurrently. We then compare the quality and practicality of author labeling to three traditional annotation approaches for sentiment analysis and find author labeling to be higher quality, faster to acquire, and cheaper. These findings reinforce existing literature showing that annotations, especially of egocentric and subjective beliefs, are significantly higher quality when labeled by the author rather than a third party. To facilitate broader scientific adoption, we release an author labeling service for the research community at academic.echollm.io.
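The closed loop the abstract describes — detect a task-relevant query, ask the author a labeling question, and immediately fold the answer into an online-learning recommender — can be sketched in miniature. Everything below is an illustrative assumption, not the paper's actual system: `is_task_relevant` stands in for the query trigger, `make_label_question` for dynamic question generation, and `OnlineRecommender` is a plain logistic-regression scorer updated one author label at a time.

```python
# Hypothetical sketch of the "annotate-as-you-train" loop; all names and
# logic here are illustrative stand-ins, not the paper's implementation.
import math


def is_task_relevant(query: str) -> bool:
    # Stand-in trigger: fire only on queries with apparent shopping intent.
    return any(kw in query.lower() for kw in ("buy", "recommend", "best"))


def make_label_question(query: str) -> str:
    # Dynamic question generation reduced to a single template.
    return f"How do you feel about '{query}'? (0 = negative, 1 = positive)"


class OnlineRecommender:
    """Logistic-regression scorer updated per author label via SGD."""

    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def _score(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, label: float):
        # One SGD step on the log-loss gradient: w <- w - lr * (p - y) * x.
        err = self._score(x) - label
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]

    def predict(self, x) -> float:
        return self._score(x)


if __name__ == "__main__":
    model = OnlineRecommender(n_features=2)
    # Simulated stream: (query, feature vector, author's answer).
    stream = [("recommend a laptop", [1.0, 0.0], 1.0),
              ("best budget phone", [0.0, 1.0], 0.0)] * 200
    for query, features, author_answer in stream:
        if is_task_relevant(query):
            _question = make_label_question(query)  # shown to the author
            model.update(features, author_answer)   # learn from the reply
    print(round(model.predict([1.0, 0.0]), 2))
```

The point of the sketch is the data flow, not the model: each author answer updates the scorer immediately, so recommendation quality can improve continuously during deployment rather than waiting for a batch relabeling cycle.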