GPT, But Backwards: Exactly Inverting Language Model Outputs

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses exact input reconstruction: recovering the precise prompt that produced a given large language model (LLM) output, in support of post-hoc auditing and the detection of fabricated output reports. The task is formalized as a discrete optimization problem with a unique global optimum. To solve it efficiently, the authors propose SODA, an algorithm that combines a continuous relaxation of the input space, gradient-based inversion of next-token logits, periodic restarts, and parameter decay. Experiments on models ranging from 33M to 3B parameters show that SODA fully recovers 79.5% of short (≤15 token) out-of-distribution inputs with zero false positives; longer inputs, especially those containing private information, remain largely unrecoverable, suggesting that current LLM deployments retain some inherent resistance to inversion attacks. The result is a theoretically grounded and empirically validated framework for auditing LLMs via input reconstruction.
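The discrete optimisation view mentioned in the summary can be sketched as follows (notation ours, a simplification of the paper's formal setup): writing f for the model mapping a prompt x of T tokens over vocabulary V to its next-token logits, and y* for the observed logit vector, inversion seeks

```latex
\hat{x} \;=\; \operatorname*{arg\,min}_{x \in \mathcal{V}^{T}} \;\bigl\| f(x) - y^{*} \bigr\|_{2}^{2}
```

The continuous relaxation then replaces each one-hot token with a point on the probability simplex, i.e. optimises over soft token distributions by gradient descent and rounds back to discrete tokens via arg max.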

📝 Abstract
While existing auditing techniques attempt to identify potential unwanted behaviours in large language models (LLMs), we address the complementary forensic problem of reconstructing the exact input that led to an existing LLM output, enabling post-incident analysis and potentially the detection of fake output reports. We formalize exact input reconstruction as a discrete optimisation problem with a unique global minimum and introduce SODA, an efficient gradient-based algorithm that operates on a continuous relaxation of the input search space with periodic restarts and parameter decay. Through comprehensive experiments on LLMs ranging in size from 33M to 3B parameters, we demonstrate that SODA significantly outperforms existing approaches. We succeed in fully recovering 79.5% of shorter out-of-distribution inputs from next-token logits, without a single false positive, but struggle to extract private information from the outputs of longer (15+ token) input sequences. This suggests that standard deployment practices may currently provide adequate protection against malicious use of our method. Our code is available at https://doi.org/10.5281/zenodo.15539879.
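A minimal sketch of this style of inversion on a toy model (the network, dimensions, and all names here are our illustrative assumptions, not the paper's SODA implementation): each input token is relaxed to a point on the probability simplex via a row-wise softmax, the logit-matching loss is minimised by gradient descent with random restarts and step-size decay, and tokens are read off by arg max.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "language model": rows of P are relaxed one-hot token vectors;
# observed output is logits(P) = W @ tanh(flatten(P @ E)).
V, T, D = 6, 3, 6                            # vocab size, prompt length, embed dim
E = rng.standard_normal((V, D))              # fixed random embedding table
W = rng.standard_normal((T * D, T * D)) / np.sqrt(T * D)

def model_logits(P):
    return W @ np.tanh(P @ E).ravel()

# Ground-truth prompt and the observed logits we try to invert.
true_tokens = np.array([2, 5, 1])
y_star = model_logits(np.eye(V)[true_tokens])

def softmax_rows(Z):
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    return P / P.sum(axis=1, keepdims=True)

def invert(y_star, n_restarts=5, n_steps=4000, lr0=0.1, decay=0.9995):
    best_Z, best_loss = None, np.inf
    for _ in range(n_restarts):              # periodic random restarts
        Z = 0.1 * rng.standard_normal((T, V))
        lr = lr0
        for _ in range(n_steps):
            P = softmax_rows(Z)              # continuous relaxation of tokens
            a = np.tanh(P @ E)
            r = W @ a.ravel() - y_star
            loss = float(r @ r)
            if loss < best_loss:
                best_loss, best_Z = loss, Z.copy()
            # Analytic backward pass through the toy model.
            dA = (W.T @ (2.0 * r)).reshape(T, D)
            dH = dA * (1.0 - a ** 2)         # derivative of tanh
            dP = dH @ E.T
            # Backprop through the row-wise softmax.
            dZ = P * (dP - (dP * P).sum(axis=1, keepdims=True))
            Z -= lr * dZ
            lr *= decay                      # parameter (step-size) decay
    # Round the relaxed solution back to discrete tokens.
    return softmax_rows(best_Z).argmax(axis=1), best_loss

recovered, final_loss = invert(y_star)
```

On this small instance the relaxed problem is well conditioned, so gradient descent typically drives the soft tokens to the true one-hot prompt; the restarts and decay mirror the ingredients the abstract attributes to SODA, not its actual schedule.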
Problem

Research questions and friction points this paper is trying to address.

Reconstruct exact input from LLM output
Detect fake output reports via inversion
Assess privacy risks in longer sequences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inverts LLM outputs via discrete optimization
Uses SODA algorithm with gradient-based search
Recovers inputs from next-token logits effectively