🤖 AI Summary
Large language models (LLMs) for code generation often introduce security vulnerabilities even when the programs they produce are functionally correct, and existing security-enhancing approaches remain limited in effectiveness and offer little insight into the models' internal mechanisms. This work shows, for the first time, that code LLMs internally distinguish fine-grained security-related subconcepts in their representations. Building on this insight, the authors propose a lightweight, concept-driven guidance mechanism that dynamically adjusts token-level internal representations during decoding, steering the model toward code that is both secure and functionally correct, without requiring retraining. Experiments show the approach significantly outperforms state-of-the-art methods across multiple secure coding benchmarks, effectively balancing security and functional correctness.
📝 Abstract
Large Language Models (LLMs) show remarkable capabilities in understanding natural language and generating complex code. However, as practitioners adopt CodeLLMs for increasingly critical development tasks, research reveals that these models frequently generate functionally correct yet insecure code, posing significant security risks. While multiple approaches have been proposed to improve security in AI-based code generation, evaluations on combined benchmarks show these methods remain insufficient for practical use, achieving only limited improvements in both functional correctness and security. This stems from a fundamental gap in understanding the internal mechanisms of code generation and the root causes of security vulnerabilities, forcing researchers to rely on heuristics and empirical observations. In this work, we investigate the internal representation of security concepts in CodeLLMs, revealing that models are often aware of vulnerabilities even as they generate insecure code. Through systematic evaluation, we demonstrate that CodeLLMs can distinguish between security subconcepts, enabling a more fine-grained analysis than prior black-box approaches. Leveraging these insights, we propose Secure Concept Steering for CodeLLMs (SCS-Code). During token generation, SCS-Code steers the LLM's internal representations toward secure and functional code output, yielding a lightweight, modular mechanism that can be integrated into existing code models without retraining. Our approach achieves superior performance compared to state-of-the-art methods across multiple secure coding benchmarks.
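The abstract describes steering token-level internal representations during decoding but does not spell out the mechanism here. The sketch below illustrates the general activation-steering recipe that such methods typically build on: derive a concept direction from contrastive activations (here, a difference-of-means between hidden states collected on secure vs. insecure code), then add a scaled copy of that direction to the hidden state at each decoding step. The function names (`concept_direction`, `steer`), the strength parameter `alpha`, and the toy random "activations" are all illustrative assumptions, not SCS-Code's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def concept_direction(secure_acts, insecure_acts):
    """Unit vector pointing from 'insecure' toward 'secure' activations,
    computed as a difference of means over hidden states gathered offline."""
    d = secure_acts.mean(axis=0) - insecure_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def steer(hidden_state, direction, alpha=4.0):
    """Shift one token's hidden state along the concept direction before it
    flows on to later layers / the unembedding (alpha controls strength)."""
    return hidden_state + alpha * direction

# Toy demo: 16-d random vectors stand in for one layer's CodeLLM activations.
dim = 16
secure = rng.normal(0.5, 1.0, size=(32, dim))     # activations on secure snippets
insecure = rng.normal(-0.5, 1.0, size=(32, dim))  # activations on insecure snippets
d = concept_direction(secure, insecure)

h = rng.normal(size=dim)             # hidden state at the current decoding step
h_steered = steer(h, d, alpha=4.0)   # nudged toward the "secure" concept

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
# Adding a positive multiple of d strictly increases cosine alignment with d.
assert cos(h_steered, d) > cos(h, d)
```

In practice the shift would be applied inside the model (e.g. via a forward hook on a chosen layer) at every generated token, which is what makes the mechanism lightweight and pluggable: no weights change, only the activations passing through.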