🤖 AI Summary
This study investigates cross-national differences in public expectations regarding AI alignment, specifically accuracy and reliability, safety, bias mitigation, and the promotion of aspirational imaginaries, between Germany and the United States. Drawing on two large-scale surveys totaling 3,556 respondents, the analysis employs multivariate regression and hierarchical modeling to compare citizen preferences and their underlying determinants. Results reveal significantly higher public support for all four alignment dimensions in the U.S., whereas German respondents exhibit greater caution, particularly toward the more normatively charged goals of bias mitigation and aspirational imaginaries. In Germany, AI usage frequency and attitudes toward free speech are stronger predictors of alignment preferences, while U.S. attitudes show greater internal consistency. Moving beyond technocentric paradigms, this research provides an empirical, value-sensitive foundation for comparative AI governance, advancing alignment standards from expert-defined technical specifications toward socially negotiated, democratically grounded frameworks.
📝 Abstract
Recent advances in generative Artificial Intelligence have raised public awareness, shaping expectations and concerns about its societal implications. Central to these debates is the question of AI alignment -- how well AI systems meet public expectations regarding safety, fairness, and social values. However, little is known about what people expect from AI-enabled systems and how these expectations differ across national contexts. We present evidence from two surveys of public preferences for key functional features of AI-enabled systems in Germany (n = 1800) and the United States (n = 1756). We examine support for four types of alignment in AI moderation: accuracy and reliability, safety, bias mitigation, and the promotion of aspirational imaginaries. U.S. respondents report significantly higher AI use and consistently greater support for all alignment features, reflecting broader technological openness and higher societal involvement with AI. In both countries, accuracy and safety enjoy the strongest support, while more normatively charged goals -- such as fairness and aspirational imaginaries -- receive more cautious backing, particularly in Germany. We also explore how individual experience with AI, attitudes toward free speech, political ideology, partisan affiliation, and gender shape these preferences. AI use and free speech support explain more variation in Germany. In contrast, U.S. responses show greater attitudinal uniformity, suggesting that higher exposure to AI may consolidate public expectations. These findings contribute to debates on AI governance and cross-national variation in public preferences. More broadly, our study demonstrates the value of empirically grounding AI alignment debates in public attitudes and of explicitly integrating normatively grounded expectations into theoretical and policy discussions on the governance of AI-generated content.