🤖 AI Summary
This paper addresses the zero-shot inverse reconstruction problem for black-box text embeddings: recovering semantically equivalent text from an embedding vector without access to training data, model fine-tuning, or internal parameters, relying solely on encoder API queries. We propose a general adversarial decoding framework that jointly incorporates gradient surrogate estimation and semantic regularization during search, enabling the first cross-model zero-shot text reconstruction across diverse encoders (e.g., BERT, SBERT, E5, BGE). Extensive evaluation on mainstream embedding models demonstrates substantial improvements in recovering key semantic information, with over 10× higher query efficiency than state-of-the-art methods such as vec2text. Our approach establishes a new paradigm for assessing embedding semantic fidelity and analyzing information leakage risks in vector databases.
📝 Abstract
Embedding inversion, i.e., reconstructing text given its embedding and black-box access to the embedding encoder, is a fundamental problem in both NLP and security. From the NLP perspective, it helps determine how much semantic information about the input is retained in the embedding. From the security perspective, it measures how much information is leaked by vector databases and embedding-based retrieval systems. State-of-the-art methods for embedding inversion, such as vec2text, have high accuracy but require (a) training a separate model for each embedding, and (b) a large number of queries to the corresponding encoder. We design, implement, and evaluate ZSInvert, a zero-shot inversion method based on the recently proposed adversarial decoding technique. ZSInvert is fast, query-efficient, and can be used for any text embedding without training an embedding-specific inversion model. We measure the effectiveness of ZSInvert on several embeddings and demonstrate that it recovers key semantic information about the corresponding texts.
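To make the black-box setting concrete, the core primitive behind query-based inversion is simple: propose candidate texts, embed each one through the encoder API, and keep the candidate whose embedding is closest to the target. The sketch below illustrates only this scoring-and-selection step, not ZSInvert's adversarial decoding search; the `embed` function is a self-contained toy stand-in (a normalized character-frequency "encoder") for what would really be a remote API call, and all names here are illustrative assumptions rather than the paper's implementation.

```python
import math

def embed(text: str) -> list[float]:
    """Toy stand-in for a black-box embedding API: a unit-normalized
    letter-frequency vector. A real attack would call a remote encoder."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def pick_best(target_embedding: list[float], candidates: list[str]) -> str:
    """Rank candidate texts by embedding similarity to the target vector.
    An actual inversion method must *generate* good candidates (e.g., via
    an LLM-guided adversarial search); this shows only the selection step."""
    return max(candidates, key=lambda t: cosine(embed(t), target_embedding))

# The attacker holds only the target embedding, not the original text.
target = embed("the cat sat on the mat")
guesses = ["a dog ran home", "the cat sat on a mat", "stock prices fell"]
best = pick_best(target, guesses)
```

In this toy setting the near-paraphrase "the cat sat on a mat" wins, since its embedding is closest to the target; the hard part that methods like vec2text and ZSInvert address is producing such candidates efficiently, with few encoder queries.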