The Hidden Risks of LLM-Generated Web Application Code: A Security-Centric Evaluation of Code Generation Capabilities in Large Language Models

📅 2025-04-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the security compliance of web code generated by mainstream large language models (LLMs)—including ChatGPT, DeepSeek, Claude, Gemini, and Grok—across critical domains: authentication mechanisms, session management, input validation, and HTTP security headers. Method: We establish a unified security benchmark grounded in the OWASP Top 10 and CWE, employing multi-dimensional verification via static analysis, dynamic testing, and expert manual auditing under standardized prompting and configuration conditions. Contribution/Results: Our empirical cross-model analysis reveals that all evaluated LLMs consistently generate code containing high-severity vulnerabilities; none satisfy industry-recognized secure coding best practices. We propose a novel human-in-the-loop paradigm—“collaborative human–LLM review integrated with a domain-specific security assessment framework”—and demonstrate that LLM-generated code must undergo rigorous, specialized security review before production deployment. The study delivers a reproducible methodology and empirical evidence to inform security governance in AI-augmented software development.
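As a concrete illustration of one audited dimension (this sketch is not taken from the paper): checking which HTTP security headers a generated handler fails to send. The header names are standard; the expected values below are common hardening defaults and an assumption, not the paper's exact benchmark.

```python
# Illustrative sketch, not code from the paper: a minimal audit of the
# HTTP security headers the study evaluates. Values are typical
# hardening defaults (assumed), not the authors' benchmark.
REQUIRED_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
}

def missing_security_headers(response_headers):
    """Return, sorted, the recommended headers absent from a response."""
    return sorted(h for h in REQUIRED_HEADERS if h not in response_headers)

# A bare-bones generated handler often sends only Content-Type,
# so every recommended header above is reported missing:
print(missing_security_headers({"Content-Type": "text/html"}))
```

A static or dynamic checker of this shape is one way the paper's "multi-dimensional verification" of header compliance could be mechanized.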

📝 Abstract
The rapid advancement of Large Language Models (LLMs) has enhanced software development processes, reducing the time and effort required for coding and improving developer productivity. However, despite their potential benefits, LLMs have been shown to generate insecure code in controlled environments, raising critical concerns about their reliability and security in real-world applications. This paper uses predefined security parameters to evaluate the security compliance of LLM-generated code across multiple models, such as ChatGPT, DeepSeek, Claude, Gemini, and Grok. The analysis reveals critical vulnerabilities in authentication mechanisms, session management, input validation, and HTTP security headers. Although some models implement security measures to a limited extent, none fully align with industry best practices, highlighting the associated risks in automated software development. Our findings underscore that human expertise is crucial for ensuring secure software deployment and for reviewing LLM-generated code. They also point to the need for robust security assessment frameworks to enhance the reliability of LLM-generated code in real-world applications.
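To make the input-validation vulnerability class named above concrete (a sketch of the general pattern, not code from the paper): SQL injection (OWASP Top 10 A03, CWE-89) arises when user input is spliced into a query string, a pattern code generators frequently emit. The table and function names below are hypothetical.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Insecure pattern: SQL built by string interpolation. A crafted
    # input such as "' OR '1'='1" rewrites the WHERE clause and
    # returns every row (SQL injection, CWE-89).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Secure pattern: a parameterized query. The driver binds the
    # value, so the injection payload matches no rows.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])
    payload = "' OR '1'='1"
    print(len(find_user_unsafe(conn, payload)))  # leaks all rows: 2
    print(len(find_user_safe(conn, payload)))    # matches none: 0
```

The two functions differ by one line, which is exactly why static analysis and expert review, rather than surface inspection, are needed to catch the insecure variant.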
Problem

Research questions and friction points this paper is trying to address.

Evaluating security risks in LLM-generated web application code
Identifying vulnerabilities in authentication and input validation
Highlighting need for human review in secure code deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates LLM-generated code security compliance
Reveals vulnerabilities in authentication and validation
Advocates human review for secure deployment
Swaroop Dora
Department of IT, IIIT Allahabad, India
Deven Lunkad
Department of ECE, IIIT Allahabad, India
Naziya Aslam
Department of IT, IIIT Allahabad, India
S. Venkatesan
Department of IT, IIIT Allahabad, India
Sandeep Kumar Shukla
International Institute of Information Technology (IIIT) Hyderabad
Cyber Security · Blockchain · Formal Methods · VLSI · Formal Verification