🤖 AI Summary
This study investigates the technological alignment, methodological tensions, and ethical challenges arising from generative AI (genAI) versus traditional machine learning (ML) in social science research. Drawing on 284 survey responses, including a novel randomized experiment contrasting "AI" versus "ML" terminology, and 15 in-depth interviews, we find social scientists exhibit significantly lower trust in genAI than in ML, primarily due to genAI's opacity and automation bias, whereas ML benefits from greater statistical transparency. Terminological confusion further erodes genAI's credibility. Our mixed-methods analysis empirically demonstrates how conceptual misalignment shapes technology adoption, revealing rapid genAI uptake alongside pronounced ethical concerns, particularly around accountability and interpretability. We propose a five-dimensional governance framework targeting developers, researchers, educators, and policymakers, emphasizing explainability enhancement, transparency mechanisms, and context-sensitive implementation.
📝 Abstract
The integration of artificial intelligence (AI) into social science research practices raises significant technological, methodological, and ethical issues. We present a community-centric study drawing on 284 survey responses and 15 semi-structured interviews with social scientists, describing their familiarity with, perceived usefulness of, and ethical concerns about the use of AI in their field. A crucial innovation in the study design is splitting our survey sample in half, presenting the same questions to each group but randomizing whether participants were asked about "AI" or "Machine Learning" (ML). We find that the use of AI in research settings has increased significantly among social scientists, in step with the widespread popularity of generative AI (genAI). These tools have been used for a range of tasks, from summarizing literature to drafting research papers. Some respondents tried these tools out of curiosity but were dissatisfied with the results, while others have since integrated them into their typical workflows. Participants, however, also reported concerns about the use of AI in research contexts. This is a departure from more traditional ML algorithms, which they view as statistically grounded; participants express greater trust in ML, citing its relative transparency compared to black-box genAI systems. Ethical concerns are raised in relation to genAI, particularly around automation bias, deskilling, research misconduct, limited interpretability, and representational harm. To guide this transition, we offer recommendations for AI developers, researchers, educators, and policymakers, focusing on explainability, transparency, ethical safeguards, sustainability, and the integration of lived experiences into AI design and evaluation processes.