🤖 AI Summary
Constructing taxonomies to classify unstructured text (e.g., personal goal statements) is time-intensive, prone to researcher bias, and hard to reproduce. Method: The paper proposes a human–AI collaborative, iterative text-analysis workflow that combines top-down and bottom-up strategies for taxonomy generation, evaluation, refinement, and validation. Through prompt engineering, domain researchers and large language models (LLMs) collaborate over multiple turns, with human feedback driving each round of taxonomy refinement. Intercoder reliability is quantified with Cohen’s κ within a structured coding framework. Results: On a dataset of personal goals coded into life domains, the approach achieves κ > 0.85, substantially improving analytical efficiency, reliability, and reproducibility. The work integrates LLMs throughout the qualitative analysis loop, offering a methodology for low-bias, high-fidelity open-text classification.
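To make the reliability check concrete, here is a minimal sketch of how the reported Cohen’s κ could be computed between a human coder and an LLM coder, assuming scikit-learn is available; the labels below are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch: Cohen's kappa between a human coder's labels and an
# LLM's labels over the same items. Labels are hypothetical examples.
from sklearn.metrics import cohen_kappa_score

human_labels = ["health", "career", "social", "health", "finance", "career"]
llm_labels   = ["health", "career", "social", "leisure", "finance", "career"]

kappa = cohen_kappa_score(human_labels, llm_labels)
print(f"Cohen's kappa: {kappa:.2f}")  # the paper reports kappa > 0.85 on its dataset
```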
📝 Abstract
Analyzing texts such as open-ended responses, headlines, or social media posts is a time- and labor-intensive process highly susceptible to bias. Large language models (LLMs) are promising tools for text analysis that can apply either a predefined (top-down) or a data-driven (bottom-up) taxonomy without sacrificing quality. Here we present a step-by-step tutorial to efficiently develop, test, and apply taxonomies for analyzing unstructured data through an iterative and collaborative process between researchers and LLMs. Using personal goals provided by participants as an example, we demonstrate how to write prompts to review datasets and generate a taxonomy of life domains, evaluate and refine the taxonomy through prompt and direct modifications, test the taxonomy and assess intercoder agreement, and apply the taxonomy to categorize an entire dataset with high intercoder reliability. We discuss the possibilities and limitations of using LLMs for text analysis.
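The final step the abstract describes, applying the taxonomy to categorize an entire dataset, could be implemented as a simple prompting loop. The sketch below assumes the OpenAI Python client; the taxonomy, model name, and example goals are illustrative assumptions, not the paper’s materials or prompts.

```python
# Hedged sketch of the "apply the taxonomy" step: prompt an LLM to assign
# each open-text goal to exactly one category from a fixed taxonomy.
# Taxonomy, model name, and goals are illustrative, not from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TAXONOMY = ["health", "career", "finances", "relationships", "leisure", "personal growth"]

def classify_goal(goal: str) -> str:
    """Ask the model to pick one life domain for a goal statement."""
    prompt = (
        "Classify the following personal goal into exactly one of these "
        f"life domains: {', '.join(TAXONOMY)}.\n"
        f"Goal: {goal}\n"
        "Answer with the domain name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the paper does not fix a specific model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # deterministic output aids reproducibility checks
    )
    return response.choices[0].message.content.strip().lower()

goals = ["Run a marathon next spring", "Save for a house down payment"]
labels = [classify_goal(g) for g in goals]
print(labels)  # e.g. ["health", "finances"]
```

The LLM-assigned labels can then be compared against human codes with the κ computation sketched above to assess intercoder reliability.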