RLSF: Reinforcement Learning via Symbolic Feedback

📅 2024-05-26
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit limited capability in domain-specific reasoning and logical alignment; conventional fine-tuning struggles to integrate symbolic knowledge, and reward modeling relies on sparse, unreliable scalar rewards. This paper introduces *Reinforcement Learning via Symbolic Feedback* (RLSF), a reinforcement learning paradigm that leverages structured, verifiable certificates generated by non-differentiable symbolic reasoning tools (e.g., theorem provers, chemistry solvers, knowledge bases) as fine-grained (token-level), interpretable supervision signals, eliminating dependence on black-box reward models and human preference data. RLSF integrates an RL framework with certificate-driven policy optimization. Experiments demonstrate that RLSF consistently outperforms RLHF and other baselines across program synthesis, three chemistry reasoning tasks, and the Game of 24. Notably, open-source models fine-tuned via RLSF surpass GPT-4 in domain-specific performance.

📝 Abstract
Reinforcement Learning with Human Feedback (RLHF) is considered a standard approach to fine-tuning Large Language Models (LLMs). However, such methods often face limitations such as unsound black-box reward models, difficulties in collecting human preference data, and the reliance on sparse scalar rewards. These methods often fall short when applied to tasks that require complex domain-specific understanding. To address these challenges, we propose a new fine-tuning paradigm we refer to as Reinforcement Learning via Symbolic Feedback (RLSF), which aims to improve domain-specific understanding of LLMs more effectively than traditional reward signals. In the RLSF setting, the LLM being fine-tuned is considered an RL agent, while the environment is allowed access to reasoning or domain knowledge tools (e.g., solvers, provers, algebra systems, or knowledge bases). Crucially, in RLSF, these reasoning tools can provide feedback to the LLMs via poly-sized certificates (e.g., proofs) that characterize errors in the LLM-generated object with respect to some correctness specification. As a bonus, our RLSF approach does not require the reasoning systems we use to be differentiable. The ability of RLSF-based fine-tuning to leverage certificate-generating symbolic tools enables sound fine-grained (token-level) reward signals to LLMs, and thus addresses the limitations of traditional reward models mentioned above. Via extensive evaluations, we show that our RLSF-based fine-tuning of LLMs outperforms traditional approaches on five different applications, namely, program synthesis from natural-language pseudo-code to a programming language, three chemistry tasks, and solving the Game of 24. A takeaway is that fine-tuning via RLSF enables relatively smaller LLMs to significantly outperform closed-source models that are orders of magnitude larger (e.g., GPT-4).
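The certificate-driven reward idea in the abstract can be sketched minimally. In this illustrative sketch (not the paper's actual scheme), the "symbolic tool" is Python's own compiler acting as a syntax checker for the program-synthesis setting, and the per-line reward shape (+1 on success, -1 on the faulty line) is an assumption made here for clarity:

```python
# Hypothetical sketch of RLSF-style fine-grained rewards: a symbolic tool
# (here, Python's compiler) checks an LLM-generated program and, on failure,
# returns a small "certificate" locating the error, which is turned into
# per-line rewards instead of a single sparse scalar.

def certificate(program: str):
    """Run the symbolic checker; return None on success, else an error certificate."""
    try:
        compile(program, "<llm_output>", "exec")
        return None
    except SyntaxError as err:
        return {"line": err.lineno, "msg": err.msg}

def line_rewards(program: str) -> list:
    """Map the certificate to a fine-grained reward vector over lines."""
    lines = program.splitlines()
    cert = certificate(program)
    if cert is None:
        return [1.0] * len(lines)          # whole program verified
    rewards = [0.0] * len(lines)
    rewards[cert["line"] - 1] = -1.0       # blame the line the certificate names
    return rewards

good = "x = 1\ny = x + 1\n"
bad = "x = 1\ny = x +\n"
print(line_rewards(good))  # → [1.0, 1.0]
print(line_rewards(bad))   # → [0.0, -1.0]
```

The point of the sketch is the shape of the signal: a non-differentiable external checker supplies a verifiable, localized error description, so the RL objective can reward or penalize specific spans of the generated object rather than the whole sequence.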
Problem

Research questions and friction points this paper is trying to address.

Enhance LLMs' domain-specific reasoning with symbolic feedback
Address sparse rewards in traditional fine-tuning methods
Bridge symbolic reasoning and LLM fine-tuning for precise alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses symbolic tools for fine-grained feedback
Leverages poly-sized certificates for error correction
Bridges symbolic reasoning with LLM fine-tuning