🤖 AI Summary
This study investigates whether a brief AI literacy intervention can mitigate high school students' over-reliance on erroneous ChatGPT suggestions. Method: In a randomized controlled trial, participants solved mathematical puzzles using ChatGPT responses deliberately engineered to contain a 50% error rate; the intervention group received concise instruction on large language model (LLM) mechanisms and limitations, while the control group received no instruction. Contribution/Results: Contrary to expectations, the intervention did not reduce error adoption (overall error acceptance remained high at 52.1%) and significantly increased the rate at which correct suggestions were overlooked. This counterintuitive finding challenges the prevailing assumption that improved cognitive understanding alone leads to better use of generative AI. It provides first empirical evidence that short-term AI literacy initiatives may intensify, rather than alleviate, uncritical trust in AI outputs. The results carry important implications for the design of AI education curricula and for the ethics of human-AI collaboration.
📝 Abstract
In this study, we examined whether a short-form AI literacy intervention could reduce the adoption of incorrect recommendations from large language models. High school seniors were randomly assigned to either a control group or an intervention group; the latter received an educational text explaining ChatGPT's working mechanism, limitations, and proper use. Participants solved math puzzles with the help of ChatGPT's recommendations, half of which were incorrect. Students adopted incorrect suggestions 52.1% of the time, indicating widespread over-reliance. The educational intervention did not significantly reduce this over-reliance; instead, it increased the rate at which students ignored ChatGPT's correct recommendations. We conclude that ChatGPT use is associated with over-reliance and that countering it by raising AI literacy is not trivial.