DS@GT at Touché: Large Language Models for Retrieval-Augmented Debate

📅 2025-07-11
🤖 AI Summary
This study addresses the lack of systematic evaluation of large language models (LLMs) in retrieval-augmented structured debate, assessing both their use of retrieved arguments and their ability to evaluate response quality. Method: The authors construct an end-to-end framework that pairs six mainstream open-source LLMs with retrieval augmentation, semantic matching, and dialogue management for debate generation and automated assessment. Contribution/Results: They propose a four-dimensional evaluation framework — Quality, Quantity, Manner, and Relation — and show empirically that LLMs can retrieve and integrate relevant arguments into logically consistent responses. Evaluation scores are highly stable, though generated responses tend toward redundancy. The core contribution is a reproducible, structured debate evaluation paradigm and empirical evidence that retrieval augmentation improves the rigor and credibility of LLM-generated debates.

📝 Abstract
Large Language Models (LLMs) demonstrate strong conversational abilities. In this Working Paper, we study them in the context of debating in two ways: their ability to perform in a structured debate along with a dataset of arguments to use and their ability to evaluate utterances throughout the debate. We deploy six leading publicly available models from three providers for the Retrieval-Augmented Debate and Evaluation. The evaluation is performed by measuring four key metrics: Quality, Quantity, Manner, and Relation. Throughout this task, we found that although LLMs perform well in debates when given related arguments, they tend to be verbose in responses yet consistent in evaluation. The accompanying source code for this paper is located at https://github.com/dsgt-arc/touche-2025-rad.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' performance in structured debate settings
Assessing LLMs' ability to retrieve and use debate arguments
Measuring debate quality via Quality, Quantity, Manner, Relation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs for retrieval-augmented debate performance
Six models evaluated on four key metrics
Verbose responses but consistent evaluation
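The retrieval-augmented debate loop summarized above can be sketched in a few functions: retrieve the arguments most similar to the opponent's utterance, then assemble them into a prompt for the debating LLM. This is a hypothetical illustration, not the paper's implementation — it substitutes a toy bag-of-words cosine similarity for the semantic matching the authors use, and all function names are invented for this sketch.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words vector; a real system would use a sentence embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_arguments(query, corpus, k=2):
    """Rank candidate arguments by similarity to the opponent's utterance."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda arg: cosine(q, embed(arg)), reverse=True)
    return ranked[:k]

def build_debate_prompt(topic, opponent_utterance, retrieved):
    """Assemble the retrieval-augmented prompt passed to the debating LLM."""
    args_block = "\n".join(f"- {a}" for a in retrieved)
    return (
        f"Topic: {topic}\n"
        f"Opponent said: {opponent_utterance}\n"
        f"Relevant arguments:\n{args_block}\n"
        "Respond concisely using the arguments above."
    )
```

In use, a turn would retrieve the top-k arguments for the opponent's last utterance and feed the resulting prompt to the model; a separate judge model would then score the reply on the four metrics (Quality, Quantity, Manner, Relation).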