Turing Representational Similarity Analysis (RSA): A Flexible Method for Measuring Alignment Between Human and Artificial Intelligence

📅 2024-11-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the quantification of alignment between large language models (LLMs) and human cognitive representations. We propose Turing RSA—a task-agnostic alignment metric grounded in human pairwise similarity judgments—enabling cross-modal (text/image) and dual-level (group/individual) evaluation. Methodologically, it integrates representational similarity analysis (RSA), cross-modal embedding alignment, and behavioral data modeling, validated across lexical, sentential, and visual granularities. Our results reveal that while state-of-the-art LLMs (e.g., GPT-4o) achieve near-human semantic alignment at the group level—particularly in the text modality—they systematically fail to capture inter-individual variability in human similarity judgments. Turing RSA thus complements conventional accuracy-based benchmarks: prompts and hyperparameters can be tuned to make model similarity judgments more or less human-like, offering a cognitively grounded framework for evaluating and refining LLM representations.

📝 Abstract
As we consider entrusting Large Language Models (LLMs) with key societal and decision-making roles, measuring their alignment with human cognition becomes critical. This requires methods that can assess how these systems represent information and facilitate comparisons to human understanding across diverse tasks. To meet this need, we developed Turing Representational Similarity Analysis (RSA), a method that uses pairwise similarity ratings to quantify alignment between AIs and humans. We tested this approach on semantic alignment across text and image modalities, measuring how different Large Language and Vision Language Model (LLM and VLM) similarity judgments aligned with human responses at both group and individual levels. GPT-4o showed the strongest alignment with human performance among the models we tested, particularly when leveraging its text processing capabilities rather than image processing, regardless of the input modality. However, no model we studied adequately captured the inter-individual variability observed among human participants. This method helped uncover certain hyperparameters and prompts that could steer model behavior to have more or less human-like qualities at an inter-individual or group level. Turing RSA enables the efficient and flexible quantification of human-AI alignment and complements existing accuracy-based benchmark tasks. We demonstrate its utility across multiple modalities (words, sentences, images) for understanding how LLMs encode knowledge and for examining representational alignment with human cognition.
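The core RSA-style comparison described above can be sketched in a few lines: collect pairwise similarity ratings from humans and from a model over the same item set, then correlate the two similarity matrices. This is a minimal illustration of the general technique, not the paper's exact pipeline; the matrix names and toy data are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def rsa_alignment(human_sim, model_sim):
    """Spearman correlation between the upper triangles of two symmetric
    (n_items x n_items) pairwise-similarity matrices -- a standard
    RSA-style alignment score."""
    iu = np.triu_indices_from(human_sim, k=1)  # unique off-diagonal pairs
    rho, _ = spearmanr(human_sim[iu], model_sim[iu])
    return rho

# Toy example: hypothetical similarity ratings for 4 items.
rng = np.random.default_rng(0)
human = rng.random((4, 4))
human = (human + human.T) / 2                  # symmetrize
model = human + 0.05 * rng.standard_normal((4, 4))
model = (model + model.T) / 2
print(rsa_alignment(human, model))             # close to 1 for small noise
```

In practice the model matrix would be filled either by prompting an LLM/VLM for pairwise ratings or by computing cosine similarities between its embeddings, and the same score can be computed per participant (individual level) or on averaged ratings (group level).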
Problem

Research questions and friction points this paper is trying to address.

Measure alignment between AI and human cognition.
Quantify similarity in information representation across tasks.
Assess AI models' ability to match human variability.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Turing RSA measures AI-human alignment via similarity ratings.
Tests semantic alignment across text and image modalities.
Identifies hyperparameters influencing human-like AI behavior.