Solving Situation Puzzles with Large Language Model and External Reformulation

📅 2025-03-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) often suffer from repetitive questioning and local guessing in multi-turn situational puzzle solving, exhibiting insufficient strategic reasoning and robustness. Method: This paper proposes an external dynamic reformulation paradigm that, at critical dialogue turns, leverages a lightweight rule- or heuristic-based module to semantically rephrase the problem state—guided by dialogue history and feedback—to break LLM reasoning impasses. Crucially, this mechanism is timing-aware and operates externally, requiring no LLM fine-tuning. Contribution/Results: To our knowledge, this is the first work to integrate such an external, timing-sensitive reformulation mechanism into interactive reasoning frameworks. Experiments on a standard situational puzzle benchmark demonstrate that our approach reduces the average number of query turns by 37% and improves win rate by 28% over strong baselines, significantly enhancing convergence efficiency and solution stability.

Technology Category

Application Category

📝 Abstract
In recent years, large language models (LLMs) have shown an impressive ability to perform arithmetic and symbolic reasoning tasks. However, we found that LLMs (e.g., ChatGPT) cannot perform well on reasoning that requires multiple rounds of dialogue, especially when solving situation puzzles. Specifically, LLMs intend to ask very detailed questions focusing on a specific aspect or same/similar questions after several rounds of Q&As. To help LLMs get out of the above dilemma, we propose a novel external reformulation methodology, where the situation puzzle will be reformulated after several rounds of Q&A or when the LLMs raise an incorrect guess. Experiments show superior performance (e.g., win rate, number of question/guess attempts) of our method than directly using LLMs for solving situation puzzles, highlighting the potential of strategic problem reformulation to enhance the reasoning capabilities of LLMs in complex interactive scenarios.
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle with multi-round dialogue reasoning in puzzles
Propose external reformulation to improve LLM puzzle-solving
Enhance LLM reasoning in complex interactive scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large language models for reasoning tasks
Introduces external reformulation methodology
Enhances LLMs' performance in dialogue puzzles
🔎 Similar Papers
No similar papers found.
K
Kun Li
University of Illinois Urbana-Champaign
X
Xinwei Chen
University of Illinois at Urbana Champaign
Tianyou Song
Tianyou Song
Columbia University
Machine Learning
Chengrui Zhou
Chengrui Zhou
Columbia University
Z
Zhuoran Liu
Carnegie Mellon University
Z
Zhenyan Zhang
Carnegie Mellon University
J
Jiangjian Guo
University of California San Diego
Q
Qing Shan
Northeastern University