From Model to Breach: Towards Actionable LLM-Generated Vulnerabilities Reporting

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the insufficient risk assessment of security vulnerabilities introduced by LLM-based programming assistants. We propose the first risk-aware evaluation framework integrating vulnerability severity, generation probability, and prompt exposure (PE)—a novel metric quantifying the susceptibility of vulnerabilities to adversarial prompting. We further introduce model exposure (ME) to measure vulnerability prevalence across models. Empirical analysis reveals that even for long-disclosed vulnerabilities, mainstream open-source code-generation models remain significantly susceptible, confirming a fundamental trade-off between security and functionality. Our contributions are threefold: (1) formal definition and empirical validation of the PE/ME dual-metric framework; (2) establishment of an actionable vulnerability prioritization mechanism grounded in quantitative risk estimation; and (3) identification of critical limitations in current security hardening techniques under realistic prompt distributions—thereby providing both theoretical foundations and practical guidance for targeted remediation of high-risk vulnerabilities.

📝 Abstract
As the role of Large Language Model (LLM)-based coding assistants in software development becomes more critical, so does the role of the bugs they generate in the overall cybersecurity landscape. While a number of LLM code security benchmarks have been proposed, alongside approaches to improve the security of generated code, it remains unclear to what extent they have impacted widely used coding LLMs. Here, we show that even the latest open-weight models remain vulnerable in the earliest reported vulnerability scenarios under a realistic use setting, suggesting that the safety-functionality trade-off has so far prevented effective patching of vulnerabilities. To help address this issue, we introduce a new severity metric, Prompt Exposure (PE), which reflects the risk posed by an LLM-generated vulnerability, accounting for vulnerability severity, generation chance, and the formulation of the prompt that induces vulnerable code generation. To encourage the mitigation of the most serious and prevalent vulnerabilities, we use PE to define the Model Exposure (ME) score, which indicates the severity and prevalence of the vulnerabilities a model generates.
Problem

Research questions and friction points this paper is trying to address.

Assessing cybersecurity risks from LLM-generated code vulnerabilities in development
Measuring vulnerability severity considering generation probability and prompt exposure
Evaluating model exposure scores for prevalent and serious security flaws
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Prompt Exposure metric for vulnerability risk
Defines Model Exposure score for severity assessment
Quantifies vulnerability generation chance and prompt influence
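The paper's exact formulas for PE and ME are not reproduced in this summary. As an illustrative sketch only, one plausible shape of such a risk score combines a vulnerability's severity (e.g. a CVSS base score), the probability that the model generates the vulnerable code, and a weight for how realistic the inducing prompt is; ME then aggregates PE across scenarios. All names, fields, and the multiplicative form below are assumptions for illustration, not the paper's definitions.

```python
from dataclasses import dataclass

@dataclass
class VulnScenario:
    severity: float       # e.g. CVSS base score in [0, 10]
    gen_prob: float       # estimated probability the model emits the vulnerable code
    prompt_weight: float  # how realistic/likely the inducing prompt is, in [0, 1]

def prompt_exposure(v: VulnScenario) -> float:
    """Illustrative PE: severity scaled by generation chance and prompt
    realism. Hypothetical formula; the paper's definition may differ."""
    return v.severity * v.gen_prob * v.prompt_weight

def model_exposure(scenarios: list[VulnScenario]) -> float:
    """Illustrative ME: aggregate PE over all vulnerability scenarios
    observed for a given model."""
    return sum(prompt_exposure(v) for v in scenarios)

# Hypothetical scenarios for one model
scenarios = [
    VulnScenario(severity=9.8, gen_prob=0.6, prompt_weight=0.9),  # e.g. an injection flaw
    VulnScenario(severity=5.3, gen_prob=0.2, prompt_weight=0.5),  # a lower-risk flaw
]
print(round(model_exposure(scenarios), 3))  # → 5.822
```

Under this sketch, a high-severity vulnerability that the model emits often under realistic prompts dominates the score, which matches the paper's stated goal of prioritizing the most serious and prevalent vulnerabilities for remediation.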
Cyril Vallez
IEM, HES-SO Valais-Wallis, Switzerland
Alexander Sternfeld
IEM, HES-SO Valais-Wallis, Switzerland
Andrei Kucharavy
Assistant Professor, HES-SO Valais-Wallis
Machine Learning · Evolution · Distributed Computation · Computational Biology
Ljiljana Dolamic
Cyber-Defence Campus, armasuisse, Switzerland