🤖 AI Summary
This paper identifies HACKODE, a novel supply-chain attack targeting LLM-based coding assistants that rely on external documentation (e.g., Stack Overflow posts, API references) during code generation. The attack exploits the models' dependence on such sources: attackers poison high-ranking documents with crafted malicious content, inducing the generated code to embed critical security vulnerabilities such as buffer overflows and incomplete input validation. The authors are the first to systematically discover and empirically validate this "external knowledge poisoning → vulnerability injection" mechanism. They propose a generalizable adversarial attack framework that combines prompt-aware sequence crafting with targeted injection of malicious documents. Evaluated across four models — GPT-4, Claude, CodeLlama, and StarCoder — the attack achieves an average success rate of 84.29%; in a realistic IDE-plugin setting, it reaches 75.92%. These results demonstrate HACKODE's practical threat and cross-platform applicability.
📝 Abstract
Due to insufficient domain knowledge, LLM coding assistants often reference related solutions from the Internet to address programming problems. However, incorporating external information into LLMs' code generation process introduces new security risks. In this paper, we reveal a real-world threat, named HACKODE, where attackers exploit referenced external information to embed attack sequences, causing LLMs to produce code with vulnerabilities such as buffer overflows and incomplete validation. We designed a prototype of the attack, which generates attack sequences that remain effective across diverse inputs, user queries, and prompt templates. In an evaluation on two general LLMs and two code LLMs, we demonstrate that the attack is effective, achieving an 84.29% success rate. Additionally, on a real-world application, HACKODE achieves a 75.92% attack success rate (ASR), demonstrating its real-world impact.