Social Scientists on the Role of AI in Research

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the technological alignment, methodological tensions, and ethical challenges arising from generative AI (genAI) versus traditional machine learning (ML) in social science research. Drawing on 284 survey responses—including a novel randomized experiment contrasting “AI” versus “ML” terminology—and 15 in-depth interviews, we find social scientists exhibit significantly lower trust in genAI than in ML, primarily due to genAI’s opacity and automation bias, whereas ML benefits from greater statistical transparency. Terminological confusion further erodes genAI credibility. Our mixed-methods analysis is the first to empirically demonstrate how conceptual misalignment shapes technology adoption, revealing rapid genAI uptake alongside pronounced ethical concerns—particularly around accountability and interpretability. We propose a five-dimensional governance framework targeting developers, researchers, educators, and policymakers, emphasizing explainability enhancement, transparency mechanisms, and context-sensitive implementation.

📝 Abstract
The integration of artificial intelligence (AI) into social science research practices raises significant technological, methodological, and ethical issues. We present a community-centric study drawing on 284 survey responses and 15 semi-structured interviews with social scientists, describing their familiarity with, perceptions of the usefulness of, and ethical concerns about the use of AI in their field. A crucial innovation in the study design is to split our survey sample in half, providing the same questions to each but randomizing whether participants were asked about "AI" or "Machine Learning" (ML). We find that the use of AI in research settings has increased significantly among social scientists, in step with the widespread popularity of generative AI (genAI). These tools have been used for a range of tasks, from summarizing literature reviews to drafting research papers. Some respondents used these tools out of curiosity but were dissatisfied with the results, while others have now integrated them into their typical workflows. Participants, however, also reported concerns about the use of AI in research contexts, a departure from their view of more traditional ML algorithms as statistically grounded. Participants express greater trust in ML, citing its relative transparency compared to black-box genAI systems. Ethical concerns, particularly around automation bias, deskilling, research misconduct, complex interpretability, and representational harm, are raised in relation to genAI. To guide this transition, we offer recommendations for AI developers, researchers, educators, and policymakers focusing on explainability, transparency, ethical safeguards, sustainability, and the integration of lived experiences into AI design and evaluation processes.
Problem

Research questions and friction points this paper is trying to address.

Examining social scientists' familiarity with, and ethical concerns about, AI in research
Comparing perceptions of AI versus Machine Learning in research practices
Addressing ethical issues like automation bias and deskilling in genAI use
Innovation

Methods, ideas, or system contributions that make the work stand out.

Split survey sample to compare AI and ML perceptions
Analyzed AI usage trends among social scientists
Proposed ethical guidelines for AI in research
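The split-sample design above can be sketched in a few lines: participants are randomly assigned to see either "AI" or "ML" in otherwise identical survey questions. This is a minimal illustrative sketch, not the authors' actual survey code; the function name, seed, and participant IDs are assumptions.

```python
import random

def assign_terminology(participant_ids, seed=42):
    """Randomly split participants in half so one group sees "AI" and the
    other sees "ML" in otherwise identical questions (hypothetical sketch
    of the paper's split-sample design)."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: ("AI" if i < half else "ML") for i, pid in enumerate(ids)}

# Example with the paper's 284 respondents (IDs here are illustrative).
conditions = assign_terminology(range(284))
counts = {"AI": 0, "ML": 0}
for condition in conditions.values():
    counts[condition] += 1
print(counts)  # {'AI': 142, 'ML': 142}
```

Because assignment is randomized, any systematic difference in reported trust between the two groups can be attributed to the terminology itself rather than to who answered.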
👥 Authors

Tatiana Chakravorti
PhD in Informatics, Pennsylvania State University
Artificial Intelligence · Human-centered AI · Science of Science · AI Ethics

Xinyu Wang
Pennsylvania State University, USA

Pranav Narayanan Venkit
Pennsylvania State University, USA

S. Koneru
Pennsylvania State University, USA

Kevin Munger
European University Institute

Sarah Rajtmajer
Pennsylvania State University, USA