Towards New Benchmark for AI Alignment & Sentiment Analysis in Socially Important Issues: A Comparative Study of Human and LLMs in the Context of AGI

📅 2025-01-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the affective understanding and dynamic evolution mechanisms of large language models (LLMs) regarding societally critical topics such as artificial general intelligence (AGI), aiming to advance scientifically grounded affective evaluation in AI alignment. Method: Leveraging Likert-scale–based human–AI comparative experiments, we systematically assess affective tendencies and three-day temporal dynamics across seven mainstream LLMs (e.g., GPT-4, Bard) and three human cohorts. Contribution/Results: We first reveal significant heterogeneity in LLM affective distributions on AGI—alongside quantifiable temporal evolution (evolution rate differences: 1.03%–8.21%)—and find that LLMs’ mean affective scores (3.32–4.12/5) significantly exceed the human average (2.97/5), exposing latent biases and conflict-of-interest risks. We propose the “human-like but non-uniform” hypothesis for LLM affect formation and introduce the first AI affective alignment benchmark tailored to societally salient issues.

📝 Abstract
With the expansion of neural networks, such as large language models, humanity is heading exponentially towards superintelligence. As various AI systems are increasingly integrated into the fabric of societies, through recommending values, devising creative solutions, and making decisions, it becomes critical to assess how these AI systems impact humans in the long run. This research aims to contribute towards establishing a benchmark for evaluating the sentiment of various Large Language Models on socially important issues. The methodology adopted was a Likert-scale survey. Seven LLMs, including GPT-4 and Bard, were analyzed and compared against sentiment data from three independent human sample populations. Temporal variations in sentiment were also evaluated over three consecutive days. The results highlighted a diversity in sentiment scores among LLMs, ranging from 3.32 to 4.12 out of 5. GPT-4 recorded the most positive sentiment score towards AGI, whereas Bard leaned towards a neutral sentiment. By contrast, the human samples showed a lower average sentiment of 2.97. The temporal comparison revealed differences in sentiment evolution between LLMs over the three days, ranging from 1.03% to 8.21%. The study's analysis outlines the prospect of conflicts of interest and bias in LLMs' sentiment formation. Results indicate that LLMs, akin to human cognitive processes, could potentially develop unique sentiments and subtly influence societies' perceptions towards the various opinions formed within them.
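The abstract reports mean Likert sentiment scores per model and a percentage-based "sentiment evolution" across three days, but does not spell out the formula. The sketch below is a minimal illustration, not the paper's method: the model names are from the study, while the response values, item counts, and the relative-change definition of the evolution rate are assumptions made for demonstration only.

```python
# Hypothetical sketch (not from the paper): computing a mean Likert sentiment
# score per model and a day-over-day "sentiment evolution rate".
# The response data and the percentage-change formula are illustrative assumptions.

from statistics import mean

# Likert responses (1 = very negative ... 5 = very positive) per model per day.
# Placeholder values only, not the study's data.
responses = {
    "GPT-4": {"day1": [4, 5, 4, 4], "day3": [4, 4, 5, 4]},
    "Bard":  {"day1": [3, 3, 4, 3], "day3": [3, 4, 3, 3]},
}

for model, days in responses.items():
    score_d1 = mean(days["day1"])  # mean sentiment on day 1
    score_d3 = mean(days["day3"])  # mean sentiment on day 3
    # One plausible definition of the evolution rate: relative change of the
    # mean score between the first and last day, expressed as a percentage.
    evolution_rate = abs(score_d3 - score_d1) / score_d1 * 100
    print(f"{model}: day1={score_d1:.2f}, day3={score_d3:.2f}, "
          f"evolution={evolution_rate:.2f}%")
```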
Problem

Research questions and friction points this paper is trying to address.

Emotional Understanding
Decision Making
Artificial Intelligence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Likert Scale
Emotional Bias in AI
Large Language Models Analysis