🤖 AI Summary
Large language models (LLMs) exhibit significant performance degradation on sequential optimization problems (SOPs) as problem complexity increases.
Method: We propose a philosophy-driven reasoning enhancement paradigm. We introduce WorldGen, a dynamic framework for generating SOPs with controllable complexity, and formalize Hegelian dialectical logic into ACE (Abstraction–Contradiction–Elimination), a training-free reasoning paradigm, requiring no fine-tuning, that integrates chain-of-thought reasoning with reflective prompt engineering.
Contribution/Results: This work pioneers the systematic integration of philosophical principles into LLM inference pipelines, yielding substantial zero-shot gains in SOP solving. Across diverse SOP benchmarks, ACE achieves an average accuracy improvement of 37.2% over strong baselines, demonstrating effectiveness, generalizability, and interpretability. The approach requires no parameter updates or task-specific training, offering a principled pathway for enhancing LLM reasoning in combinatorial optimization domains.
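The summary describes WorldGen only at a high level (dynamic generation of unseen SOPs with controllable complexity). As a minimal illustrative sketch, not the paper's actual framework, the idea of a complexity-parameterized SOP generator can be shown with a toy problem in which a solver picks one option per step and the search space grows as `branching ** num_steps`; all names here are hypothetical:

```python
import random

def generate_sop(num_steps: int, branching: int, seed: int = 0):
    """Generate a toy sequential optimization problem (SOP).

    At each of `num_steps` steps the solver picks one of `branching`
    options, each carrying an integer reward; the objective is the
    reward-maximizing sequence of choices. Complexity is controllable
    via both knobs: the search space has branching**num_steps sequences.
    (In this toy, steps are independent, so per-step maxima are optimal;
    a realistic generator would couple steps.)
    """
    rng = random.Random(seed)  # seeded for reproducible instances
    rewards = [[rng.randint(1, 100) for _ in range(branching)]
               for _ in range(num_steps)]
    optimum = sum(max(step) for step in rewards)  # ground-truth optimum
    return rewards, optimum

# A small instance: 4 steps, 3 choices per step (3**4 = 81 sequences).
rewards, opt = generate_sop(num_steps=4, branching=3, seed=42)
```

Because the generator returns a ground-truth optimum alongside each instance, an LLM's proposed choice sequence can be scored automatically as complexity is dialed up.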
📝 Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities across numerous fields, presenting an opportunity to revolutionize optimization problem-solving, a crucial, ubiquitous, and complex domain. This paper explores the proficiency of LLMs in handling Sequential Optimization Problems (SOPs). We introduce WorldGen, a dynamic framework for generating unseen SOPs with controllable complexity, to evaluate LLM performance. Our initial observations reveal that while LLMs perform well on simple SOPs, their performance degrades significantly as complexity increases. Motivated by this, we revisit philosophical hypotheses on reasoning. Inspired by the influential framework of Hegelian Dialectics, we propose ACE, a training-free reasoning paradigm that significantly improves LLM performance on SOPs without any retraining or fine-tuning.
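The abstract states that ACE improves SOP performance purely at inference time. The paper's exact procedure is not given here; as a hedged sketch, a Hegelian-dialectics-inspired loop can be rendered as thesis (initial chain-of-thought answer), antithesis (self-critique surfacing contradictions), and synthesis (revision resolving them). The function names and prompts below are illustrative assumptions, and `stub_llm` stands in for a real model call:

```python
def ace_solve(problem: str, llm, rounds: int = 2) -> str:
    """Dialectics-inspired inference loop: no weights are updated.

    thesis:     an initial chain-of-thought answer;
    antithesis: the model critiques its own answer, surfacing
                contradictions;
    synthesis:  the model revises the answer to resolve the critique.
    """
    answer = llm(f"Solve step by step:\n{problem}")  # thesis
    for _ in range(rounds):
        critique = llm(  # antithesis
            "Find flaws or contradictions in this answer.\n"
            f"Problem: {problem}\nAnswer: {answer}")
        answer = llm(  # synthesis
            "Revise the answer to resolve the critique.\n"
            f"Problem: {problem}\nAnswer: {answer}\nCritique: {critique}")
    return answer

def stub_llm(prompt: str) -> str:
    """Placeholder for an actual LLM API call."""
    return f"[response to {len(prompt)}-char prompt]"

result = ace_solve("Order tasks A, B, C to minimize total delay.",
                   stub_llm, rounds=1)
```

Since the loop is pure prompt engineering, it can wrap any chat-completion backend unchanged, which matches the abstract's claim that no retraining or fine-tuning is required.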