🤖 AI Summary
This study investigates how divergent perceptions of AI risks, benefits, and values between the general public and AI experts shape societal acceptance of AI. Method: drawing on four-dimensional evaluations (perceived likelihood, risk, benefit, and affective response) of 71 AI applications spanning domains such as healthcare, climate, employment, the arts, and the military, collected from 1,110 lay participants and 119 AI experts, we apply statistical modeling and spatial mapping visualization. Contribution/Results: we identify a systematic asymmetry in risk weighting: the public weights risk at half the level of benefit, whereas experts weight it at only one third, revealing a significant expert-public divergence. Divergence is greatest in domains such as sustainability and military AI, where experts are more optimistic, anticipating higher capabilities and perceiving lower risks. The study constructs the first empirically grounded AI perception disparity map, localizing trust gaps and providing actionable evidence for value alignment and differentiated governance frameworks.
📝 Abstract
Artificial Intelligence (AI) is transforming diverse societal domains, raising critical questions about its risks and benefits and about misalignments between public expectations and academic visions. This study examines how the general public (N=1,110), people who use or are affected by AI, and academic AI experts (N=119), people who shape AI development, perceive AI's capabilities and impact across 71 scenarios spanning sustainability, healthcare, job performance, societal divides, art, and warfare. Participants evaluated each scenario on four dimensions: expected probability, perceived risk, perceived benefit, and overall sentiment (or value). The findings reveal significant quantitative differences: compared with non-experts, experts anticipate higher probabilities, perceive lower risks, report greater benefits, and express more favorable sentiment toward AI. Notably, risk-benefit tradeoffs differ: the public assigns risk half the weight of benefits, while experts assign it only a third. Visual maps of these evaluations highlight areas of convergence and divergence, identifying potential sources of public concern. These insights offer actionable guidance for researchers and policymakers seeking to align AI development with societal values, fostering public trust and informed governance.