Measuring the Quality of Answers in Political Q&As with Large Language Models

📅 2024-04-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of evaluating answer quality in political question answering. We propose a paradigm grounded in semantic identifiability: answer quality is modeled as an unsupervised semantic retrieval task, in which a fine-tuned large language model measures how reliably an answer can be identified among a set of candidate answers given the question's text, thereby capturing its relevance and depth of engagement. Our key contribution is an automated evaluation metric for political Q&A that requires no human-annotated data and that additionally uncovers systematic patterns of evasiveness in political discourse. Empirical validation on Question Period data from the Canadian House of Commons shows that answers are, on average, moderately to highly relevant, far exceeding random baselines, and that answer quality is significantly associated with the questioner's party affiliation and the question's topic. The framework offers a scalable, interpretable tool for the computational analysis of political communication.

📝 Abstract
This article proposes a new approach for assessing the quality of answers in political question-and-answer sessions. Our methodology consists of measuring the quality of an answer based on how easily and accurately it can be recognized in a random set of candidate answers given the question's text. This measure reflects the answer's relevance and depth of engagement with the question. Like semantic search, this approach can be implemented by training a language model on the corpus of observed questions and answers without additional human-labeled data. We showcase and validate our methodology within the context of the Question Period in the Canadian House of Commons. Our analysis reveals that while some answers have a weak semantic connection to questions, suggesting some evasion or obfuscation, answers are generally at least moderately relevant, far surpassing what would be expected from random replies. Our analysis also provides valuable insights into the correlates of answer quality: we find significant correlations with the party affiliation of the members of Parliament asking the questions and the topic of the questions.
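The core idea — scoring an answer by how easily it can be picked out of a random set of candidate answers given the question — can be illustrated with a minimal sketch. The paper fine-tunes a language model on the corpus of observed questions and answers; here, as a stand-in assumption, a toy bag-of-words embedding with cosine similarity ranks the true answer among distractors, and the reciprocal rank serves as the quality score. The function name `answer_quality` and the toy sentences are illustrative, not from the paper.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding; the paper instead uses a fine-tuned
    # large language model to produce semantic representations.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer_quality(question, true_answer, distractors):
    """Rank the true answer among random candidate answers by its
    similarity to the question; return the reciprocal rank
    (1.0 = perfectly identifiable, lower = more evasive)."""
    candidates = [true_answer] + list(distractors)
    scores = [cosine(embed(question), embed(c)) for c in candidates]
    # Rank of the true answer: 1 + number of distractors scoring higher.
    rank = 1 + sum(s > scores[0] for s in scores[1:])
    return 1.0 / rank

q = "What is the government doing about housing affordability?"
a = "The government is investing in affordable housing construction."
d = ["The fishery quotas were revised last spring.",
     "Our trade mission to Asia was a success."]
print(answer_quality(q, a, d))  # → 1.0 (true answer ranks first)
```

Averaging this score over a corpus of question-answer pairs yields the kind of aggregate relevance measure the authors correlate with party affiliation and question topic; no human-labeled data is needed, since the true pairing of each question with its answer supplies the supervision signal.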
Problem

Research questions and friction points this paper is trying to address.

Assessing political answer quality
Measuring relevance without labeled data
Correlating answer quality with party and topic
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Semantic Search Technique
No Human-labeled Data
R. M. Alvarez
Division of the Humanities and Social Sciences, California Institute of Technology
Jacob Morrier
Division of the Humanities and Social Sciences, California Institute of Technology