When an AI Judges Your Work: The Hidden Costs of Algorithmic Assessment

📅 2026-03-02
🤖 AI Summary
This study investigates how workers’ behavior, output quantity, and quality change when they are informed that their work will be evaluated by an AI rather than a human. Through an online experiment integrating large language model–based scoring, human assessment, and statistical analysis, the research provides the first empirical evidence of the subtle yet significant behavioral effects of algorithmic evaluation. The findings reveal that AI evaluation leads to a notable increase in output quantity, accompanied by a decline in per-unit quality. Although workers more frequently employ external tools—such as large language models—under AI evaluation, this behavior does not account for the observed shifts in productivity or quality. These results offer critical insights into the differential impacts of human versus algorithmic assessment on labor behavior.

📝 Abstract
We use an online experiment with a real work task to study whether workers change their behavior when they know AI will be used to judge their work instead of humans. We find that individuals produce a higher quantity of output when they are assigned an AI evaluator. However, controlling for quantity, the quality of their output is lower, regardless of whether quality is measured using humans or LLM grades. We also find that workers are more likely to use external tools, including LLMs, when they know AI is used to judge their work instead of humans. However, the increase in external tool use does not appear to explain the differences in quantity or quality across treatments.
Keywords: algorithmic assessment, AI evaluation, work behavior, output quality, LLM use, human-AI interaction
David Almog
Kellogg School of Management, Northwestern University
Lucas Lippman
Walmart Connect
Daniel Martin
University of California, Santa Barbara
Behavioral Economics · Cognitive Economics · Experimental Economics · Humans and AI