🤖 AI Summary
This study addresses the tendency of climate-concerned individuals to underestimate, due to cognitive biases, the real-world impact of high-impact mitigation behaviours, a misperception that impedes effective climate action. The authors propose and empirically validate a domain-informed, personalised large language model (LLM) dialogue system that combines climate science knowledge with tailored prompt engineering to correct user misconceptions through adaptive interaction. Experimental results show that, compared with a web search, generic LLM use, or no intervention, only the personalised climate-specific LLM significantly improved participants' accuracy in recognising high-impact climate actions and increased their willingness to adopt such behaviours. This work provides the first empirical evidence that domain-customised, personalised LLMs are uniquely effective at strengthening intentions toward impactful climate action.
📄 Abstract
Mitigating climate change requires behaviour change. However, even climate-concerned individuals often hold misperceptions about which actions most reduce carbon emissions. We recruited 1201 climate-concerned individuals to examine whether discussing climate actions with a large language model (LLM) equipped with climate knowledge and prompted to provide personalised responses would foster more accurate perceptions of the impacts of climate actions and increase willingness to adopt feasible, high-impact behaviours. We compared this intervention with three alternatives: running a web search, conversing with an unspecialised LLM, and receiving no intervention. The personalised climate LLM was the only condition that increased both knowledge about the impacts of climate actions and intentions to adopt impactful behaviours. While the personalised climate LLM did not outperform a web search in improving understanding of climate action impacts, the ability of LLMs to deliver personalised, actionable guidance may make them more effective at motivating impactful pro-climate behaviour change.