🤖 AI Summary
This study investigates how large language models (LLMs) reshape scientific knowledge production, focusing on researchers' adaptive mechanisms and their implications for collaboration patterns, epistemic norms, and research infrastructure. Employing an original "insider-outsider" dual-track evaluation framework, the research integrates mixed methods -- empirical surveys, workflow analysis, collaborative behavior coding, and knowledge production framework modeling -- to bridge the perspectives of domain researchers (insiders) and AI system designers (outsiders). Findings reveal emergent human-AI co-production modalities, novel challenges in knowledge validation, and critical infrastructural gaps. Crucially, the study provides the first systematic evidence that LLMs exert a dual effect: accelerating scientific innovation while simultaneously triggering organizational restructuring across research institutions. The results offer empirically grounded design principles for AI-augmented research systems and extend CSCW scholarship into the era of generative AI.
📝 Abstract
CSCW has long examined how emerging technologies reshape the ways researchers collaborate and produce knowledge, with scientific knowledge production as a central area of focus. As AI becomes increasingly integrated into scientific research, understanding how researchers adapt to it reveals timely opportunities for CSCW research -- particularly in supporting new forms of collaboration, knowledge practices, and infrastructure in AI-driven science. This study quantifies LLM impacts on scientific knowledge production through an evaluation workflow that combines an insider-outsider perspective with a knowledge production framework. Our findings reveal how LLMs catalyze both innovation and reorganization in scientific communities, offering insights into the broader transformation of knowledge production in the age of generative AI and shedding light on new research opportunities in CSCW.