Do LLMs Consider Security? An Empirical Study on Responses to Programming Questions

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the security awareness and response capabilities of mainstream large language models (LLMs)—Claude 3, GPT-4, and Llama 3—when answering Stack Overflow programming questions containing vulnerable code. Methodologically, we conduct an empirical, multi-dimensional, and reproducible evaluation across 12 common vulnerability categories, quantifying detection rates (12.6%–40%) and analyzing warning quality against community responses. Results show LLMs exhibit relatively high detection accuracy for specific vulnerabilities (e.g., sensitive data leakage) but low overall recall; their security warnings, however, are more comprehensive and explanatory than Stack Overflow’s. We further propose a CLI-based prompt-enhancement tool integrating vulnerability classification and human-annotated validation, significantly improving LLMs’ proactive warning generation. This work provides the first reproducible, quantitative assessment of LLMs’ code-security responsiveness, establishing an empirical foundation and practical methodology for security-aware prompt engineering and model alignment.

📝 Abstract
The widespread adoption of conversational LLMs for software development has raised new security concerns regarding the safety of LLM-generated content. Our motivational study highlights ChatGPT's potential to volunteer context-specific security information to developers, promoting safe coding practices. Motivated by this finding, we conduct a study to evaluate the degree of security awareness exhibited by three prominent LLMs: Claude 3, GPT-4, and Llama 3. We prompt these LLMs with Stack Overflow questions that contain vulnerable code to evaluate whether they merely answer the questions or also warn users about the insecure code, thereby demonstrating a degree of security awareness. Further, we assess whether LLM responses provide information about the causes, exploits, and potential fixes of the vulnerability, to help raise users' awareness. Our findings show that all three models struggle to accurately detect and warn users about vulnerabilities, achieving a detection rate of only 12.6% to 40% across our datasets. We also observe that the LLMs identify certain types of vulnerabilities, such as sensitive information exposure and improper input neutralization, much more frequently than others, such as those involving external control of file names or paths. Furthermore, when LLMs do issue security warnings, they often provide more information on the causes, exploits, and fixes of vulnerabilities than Stack Overflow responses. Finally, we provide an in-depth discussion of the implications of our findings and present a CLI-based prompting tool that can be used to generate significantly more secure LLM responses.
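The study's core measurement can be sketched as a simple loop: collect LLM responses to vulnerable-code questions and count how many contain a security warning. The sketch below is illustrative only, assuming canned responses in place of real model API calls and a keyword heuristic in place of the paper's human-annotated validation of warnings.

```python
# Illustrative sketch of the detection-rate measurement. WARNING_CUES and
# the keyword heuristic are assumptions; the paper relies on human
# annotation to decide whether a response constitutes a security warning.

WARNING_CUES = ("vulnerab", "insecure", "sql injection", "sanitize",
                "security risk", "hardcoded")

def warns_about_security(response: str) -> bool:
    """Heuristic check: does the response flag a security issue?"""
    text = response.lower()
    return any(cue in text for cue in WARNING_CUES)

def detection_rate(responses: list[str]) -> float:
    """Fraction of responses (to vulnerable-code questions) that warn."""
    if not responses:
        return 0.0
    flagged = sum(warns_about_security(r) for r in responses)
    return flagged / len(responses)

# Canned responses standing in for real model output:
sample = [
    "You can run cursor.execute(query), but beware of SQL injection; "
    "sanitize user input first.",
    "Just concatenate the strings and run the query.",
]
print(detection_rate(sample))  # 0.5
```

In the paper, this rate is computed per model and per vulnerability category, which is how the 12.6%-40% range and the per-category differences (e.g., sensitive data exposure vs. external control of file paths) are obtained.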
Problem

Research questions and friction points this paper is trying to address.

Assess LLMs' security awareness in coding
Evaluate LLMs' vulnerability detection and warnings
Improve secure LLM responses via CLI tool
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates LLMs on security awareness.
Uses Stack Overflow for vulnerability testing.
Develops CLI tool for secure LLM responses.
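The CLI tool's prompt-enhancement idea can be sketched as wrapping the user's raw question in an instruction that asks the model to review the embedded code for vulnerabilities before answering. The template text below is an assumption for illustration, not the paper's actual prompt.

```python
# Hypothetical sketch of the prompt-enhancement step. The preamble wording
# is an assumption; the paper's tool also integrates vulnerability
# classification and human-annotated validation, which are omitted here.

SECURITY_PREAMBLE = (
    "Before answering, inspect any code in the question for security "
    "vulnerabilities (e.g., SQL injection or sensitive data exposure). "
    "If you find one, warn the user and explain its cause, a possible "
    "exploit, and a fix.\n\n"
)

def enhance_prompt(question: str) -> str:
    """Prepend a security-review instruction to a raw user question."""
    return SECURITY_PREAMBLE + "Question:\n" + question

# Example usage: the enhanced prompt is what gets sent to the LLM.
print(enhance_prompt("How do I build a SQL query from user input?"))
```

The point of the design, per the paper's findings, is to make security warnings proactive: the model is nudged to check for insecure code even when the user never asks about security.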